About

We invest in people we respect, trust, and admire, who have set out to build iconic organizations in the most fundamental categories.

News

KoBold Metals discovers vast Zambian copper deposit

02/05

KoBold Metals has found Zambia’s largest copper deposit in a century. The company estimates that the Mingomba site in the northern Copperbelt province will become one of the world’s top three high-grade copper mines.

News

Synthesis School raises USD 12M

04/21

USD 12M to build an education system for collaborators.

Core Theme

Fundamental Consumer Trends

Consumer technology and products are a meaningful part of any individual’s experience of the world. We invest in companies that enable consumers to have fundamentally better experiences and empower them to make better-informed decisions.

News

KoBold Metals valued at USD 2.96bn

01/01

KoBold Metals is leading the race for critical minerals needed for energy transition.

News

EUR 40M for reusable space capsule

02/03

The Exploration Company raises EUR 40M for its reusable space capsule platform, Nyx.

Partner

Florian Schindler

Florian is a General Partner at July Fund, based in Berlin, Germany.

News

Sakana AI raises USD 100M Series A

09/03


News

The Exploration Company raises USD 160M

11/17

The Series B round will fund the continued development of the Nyx spacecraft, which will be capable of carrying 3,000 kilograms of cargo to and from Earth.

Thesis

Always On Agents and Information Networks

AI-enabled workflow and productivity optimization

May 2025

Infrastructure Tech

In modern knowledge work, progress often depends on critical paths. These are sequences of tasks that must be completed in order for work to move forward. In many industries where people work standard business hours, progress is frequently blocked simply because someone is not available. AI agents can step in to operate asynchronously, handling tasks, coordinating follow-ups, and moving projects forward outside of human working hours. This allows organizations to reduce latency and maintain momentum across time zones and schedules.
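To make the critical-path idea concrete, below is a minimal sketch of the pattern described above: an "always-on" pass that picks up tasks whose dependencies are complete but whose human owners are offline. The task model and scheduling rule are illustrative assumptions, not a description of any particular product.

    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        owner_online: bool                   # is the human owner currently working?
        blocked_on: list["Task"] = field(default_factory=list)
        done: bool = False

    def ready(task: Task) -> bool:
        # A task sits on the critical path's frontier when all dependencies are done.
        return not task.done and all(dep.done for dep in task.blocked_on)

    def agent_pass(tasks: list[Task]) -> list[Task]:
        # One scheduling pass: take over ready tasks whose owners are offline,
        # so work keeps moving outside human working hours.
        return [t for t in tasks if ready(t) and not t.owner_online]

    # A review finished overnight, so the hand-off it was blocking can proceed.
    review = Task("review spec", owner_online=False, done=True)
    handoff = Task("notify vendor", owner_online=False, blocked_on=[review])
    print([t.name for t in agent_pass([review, handoff])])  # ['notify vendor']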

Beyond task automation, AI agents can also help people focus on what matters most. One of the biggest bottlenecks in large organizations is not execution, but prioritization. Agents can analyze activity across systems, identify what is urgent or strategic, and surface the highest-impact actions for individuals and teams. This ability to guide attention is just as important as the ability to take action. When agents help both complete work and clarify what should be done next, organizations become more responsive, more efficient, and better aligned.

As adoption grows, these agents begin to form a network of intelligence across systems and teams. The impact becomes even more powerful when this extends beyond a single company. In many industries, work involves coordination between multiple businesses. AI agents deployed across an industry can identify where workflows intersect and where mutual dependencies exist. While each company is focused on its own goals, there are often shared constraints or timing issues that affect everyone. Agents with visibility into these broader patterns can help coordinate across boundaries and find solutions that are optimal for the entire network. This level of cross-company coordination leads to smarter decisions, fewer delays, and more resilient operations.

We expect this transformation to play out across a range of industries, but it will begin where the conditions are most favorable. The first are industries where decisions need to move quickly across time zones and where critical paths are high-frequency and time-sensitive. The second are industries where each company interacts with a wide range of external stakeholders and depends on constant information exchange to function. Logistics is a clear example, but similar dynamics exist in sectors like finance, supply chain management, and global professional services. These are the environments where AI agents can immediately create value and where industry-wide coordination is both possible and valuable.


Thesis

Gen AI Design Platform

Creative flow.

April 2025

Infrastructure

We are witnessing the emergence of a new generation of creative tools built natively around generative AI. These tools are not simply extensions of existing software but represent a complete rethinking of how creative work is done. They allow artists, designers, and other creators to interact with generative models in more dynamic, intuitive, and flexible ways, optimizing for controllability.

Legacy platforms like Adobe face structural limitations in adapting to this new reality. Their products are built around manual workflows and rigid UX patterns. This often creates friction when trying to incorporate generative capabilities. In contrast, new platforms are creating environments specifically designed to harness the power of AI from the ground up. These new platforms can move faster, experiment more freely, and respond directly to evolving user behaviors.

The Case for a Model-Agnostic Platform

We believe the long-term opportunity lies in building a platform that is model-agnostic. Different models will continue to perform better at different types of tasks. Some may be optimized for photorealism, others for stylization and animation. Rather than rely on any one model, the winning platform will allow users to access many and use them like creative tools, choosing the right one depending on the desired outcome.
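At its core, a model-agnostic platform reduces to a routing layer behind a single creative interface. The sketch below shows the shape of that abstraction; the model names and the intent-to-model mapping are invented for illustration.

    from typing import Protocol

    class ImageModel(Protocol):
        def generate(self, prompt: str) -> bytes: ...

    class PhotorealModel:
        def generate(self, prompt: str) -> bytes:
            return b"..."  # would call a photorealism-optimized backend

    class StylizedModel:
        def generate(self, prompt: str) -> bytes:
            return b"..."  # would call a stylization/animation-optimized backend

    ROUTES: dict[str, ImageModel] = {
        "photoreal": PhotorealModel(),
        "stylized": StylizedModel(),
    }

    def generate(prompt: str, intent: str) -> bytes:
        # Route the same creative prompt to whichever model best fits the
        # desired outcome, so users choose results rather than vendors.
        return ROUTES[intent].generate(prompt)

Because every backend hides behind the same interface, new models can be added (or swapped out as capabilities converge) without changing the creative environment on top.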

Even if model performance begins to converge over time, there will still be a strong need for a unified creative environment. This environment should provide consistency, control, and flexibility across all media types and allow creatives to produce high-quality work without constantly switching tools or formats.

Open Strategic Question: Who Aggregates the Users?

One of the most important open questions is whether the models themselves will be the first to aggregate large user bases. Platforms like Midjourney and Runway have shown that high-performing models can attract large numbers of users early on. As these user bases grow, those same companies may then decide to build full creative platforms on top of their models, combining performance with workflow.

This presents a risk. If user aggregation happens at the model layer, new platform-first companies may find themselves competing not only with incumbents like Adobe but also with model providers that are quickly becoming vertically integrated.

However, we believe it is still early in the evolution of this space. Most creators are still in an exploratory phase. They are playing with tools, testing capabilities, and not yet fully committed to any one workflow or product. Adoption today is wide but shallow. There is still a real opportunity to build a platform that becomes the standard by offering both breadth of capability and depth of control.

Balancing Exploration and Execution

The platform that wins this space must meet two very different needs. First, it must support experimentation. Creatives need freedom to explore, test, remix, and iterate without friction, discovering the best use cases for generative AI as the technology evolves. Second, it must support execution. Professionals who deliver finished work need tools that are reliable, precise, and efficient.

This is a difficult balance to strike. Lean too far into experimentation, and the platform may feel like a toy. Focus too much on structure and workflows, and it may feel restrictive or uninspired. The right platform will create a continuum where users can move fluidly between play and production, depending on their goals.

Strategic Opportunity

We believe the future belongs to platforms that:

  1. Rethink creative workflows around generative systems rather than adapting old formats

  2. Provide powerful yet intuitive control layers over model outputs

  3. Integrate a wide variety of models and modalities in a seamless way

  4. Remain flexible and adaptable as model capabilities continue to evolve

  5. Serve both casual exploration and professional execution without compromise

This is not a feature update to existing software. It is the foundation of a new creative stack. The companies that recognize this shift and build with these principles in mind have the chance to define the next decade of digital creativity.




Thesis

Government GTM Enablement

Government procurement represents one of the world's largest markets yet remains trapped in antiquated processes controlled by middlemen and consultants.

April 2025

Industry Transformation

Government procurement represents one of the world's largest markets – $665B annually in the US and €2T in the EU – yet remains trapped in antiquated processes controlled by middlemen and consultants. The status quo disproportionately rewards incumbency over innovation, with procurement cycles measured in years rather than months. Only organizations with dedicated government relations teams can navigate the labyrinth of compliance requirements, shutting out smaller innovators and ultimately delivering suboptimal solutions to taxpayers. This inefficiency isn't merely a business problem but a governance crisis: critical capabilities in defense, infrastructure, and public services are delayed or never implemented due to market access friction.

Recent crises have demonstrated that procurement can move at unprecedented speed when existential pressure exists. Operation Warp Speed compressed vaccine development and procurement from years to months. Similarly, the Ukraine conflict catalyzed rapid defense procurement innovations that bypassed traditional bottlenecks. These examples prove that the system can change – what's missing are the tools to make such acceleration the norm rather than the exception.

As we write in many of our memos that touch on the public sector, there is unprecedented urgency for procurement modernization if Western societies want to meet the current geopolitical moment. Both policymakers and agency leaders recognize that national security and competitiveness depend on tapping into broader innovation ecosystems beyond traditional contractors. 

This alignment of incentives creates a unique moment for transformation and for the companies that can drive it.

The convergence of two powerful forces – outside-in technological innovation and inside-out governmental reform – has created a perfect storm for disruption:

Outside-In: AI & Infrastructure Revolution → LLMs and specialized AI models can now interpret complex government requirements, generate compliant proposals, and navigate the bureaucratic maze with unprecedented accuracy. These systems can reduce proposal creation from weeks to hours while increasing win rates through intelligence-driven targeting. This has dramatically reduced the cost of building procurement solutions capable of ingesting, analyzing, and acting upon the vast corpus of government contracting data – from forecasts to awards to performance metrics.

Inside-Out: Reform & Existential Urgency → Defense budgets are expanding dramatically amid growing global instability, with unprecedented allocations for emerging technologies. Simultaneously, government-led reindustrialization efforts and the imperative for resilient supply chains are creating entirely new procurement categories. Policymakers across Western democracies recognize that procurement modernization isn't just about efficiency – it's about ensuring critical innovations actually reach government users. The upside of the controversial DOGE program is not so much that it cuts costs but that it creates pathways for genuinely disruptive solutions to enter government and demonstrate their value. As agencies face mounting pressure to demonstrate innovation and supply chain security, they're increasingly receptive to new solutions for procurement enablement.

The most compelling opportunities in government GTM automation will come from companies that deliver comprehensive solutions rather than point products. We see three main archetypes emerging:

Proposal Intelligence Platforms (Horizontal and Vertical): Span the entire pre-award process with various entry points – some focus on opportunity discovery and pre-RFP intelligence, others on proposal generation, and still others on specific verticals like defense or healthcare. By automating the most labor-intensive aspects of government contracting while maintaining human oversight where needed, these platforms can deliver 10x efficiency improvements while maintaining quality. The most sophisticated solutions map the full ecosystem of agencies, programs, budgets, and stakeholders, enabling proactive engagement rather than reactive bidding. The critical differentiation will come from proprietary datasets and domain-specific AI fine-tuning that general-purpose LLMs cannot match.

AI-Powered Compliance and Deployment Infrastructure: The regulations governing government contracting represent perhaps the most complex body of business rules in existence. Platforms that can abstract away this complexity through automated verification, document generation, and approval workflows enable even small businesses to maintain perfect compliance without dedicated staff. The winners will build API-first architectures that can serve as middleware between any vendor and any procurement system.
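As a sketch of what such middleware might look like, the snippet below runs a proposal through a set of rule checks and returns structured findings. The rule IDs and fields are hypothetical and do not correspond to real FAR/DFARS clauses.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Finding:
        rule_id: str
        message: str

    # Each rule is a predicate over the proposal; all rules here are invented.
    RULES: dict[str, Callable[[dict], str | None]] = {
        "page-limit": lambda p: None if p["pages"] <= 30 else "exceeds 30-page limit",
        "registration": lambda p: None if p["registered"] else "vendor not registered",
    }

    def check(proposal: dict) -> list[Finding]:
        # Middleware-style verification pass: run every rule, collect findings,
        # and block submission only when something fails.
        return [Finding(rid, msg) for rid, rule in RULES.items()
                if (msg := rule(proposal)) is not None]

    print(check({"pages": 42, "registered": True}))  # [Finding('page-limit', ...)]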

Procurement Marketplaces & Networks: Two-sided marketplaces that connect government buyers with pre-vetted vendors represent a third category with powerful network effects. These platforms standardize procurement workflows, reduce friction for both sides, and create new distribution channels for vendors previously locked out of government sales. By integrating AI-powered matching algorithms, they can dramatically improve the quality of vendor-opportunity fit while ensuring compliance with diversity and locality requirements. The most successful will operate as true networks, not just catalogs, facilitating partnerships among vendors to address complex government needs.

We think companies founded post-2022 may have structural advantages in this market: they're built natively for the LLM era rather than grafting AI onto legacy systems, they're unconstrained by technical debt from previous government contracting paradigms, and they attract talent with both AI expertise and government domain knowledge.

The business model innovation is equally compelling. Where traditional government sales consultants and middlemen charge high fixed fees regardless of outcomes, AI-enabled platforms can adopt performance-based pricing models that align incentives with their customers. By directly tying compensation to contract wins or a percentage of award value, these platforms can capture a fair share of the massive value they create while dramatically lowering upfront costs for vendors. This approach is particularly transformative for smaller companies previously priced out of government opportunities, expanding the competitive landscape.

While established players will certainly compete, the velocity of innovation, ease of deployment, and business model flexibility favor new entrants who understand the unique intersection of technology and government mechanics.

The US federal market offers the most attractive initial target given its scale, homogeneity, and budget predictability. However, the platforms with lasting power will be architected from day one to address the fragmentation of global procurement systems, positioning them to expand internationally as they mature. The companies that succeed will not merely digitize existing processes but fundamentally reimagine how public and private sectors collaborate, unlocking trillions in market value while enabling more responsive, effective governance.


Thesis

Full Stack Edge AI

Enabling the physical world action layer

April 2025

Infrastructure Tech

The ability to sense, understand, and act upon physical world data in real-time is becoming fundamental to competitive advantage across industries. While cloud computing enabled the software revolution, the next wave of innovation requires computing to move closer to where data originates—whether in factories, vehicles, remote infrastructure, or medical devices.

Traditional cloud-centric architectures face fundamental limitations: latency that makes real-time decisions impossible, bandwidth constraints that make processing increasingly rich sensor data costly, and privacy concerns that restrict what data can leave local environments. As the speed of machine reasoning and machine-to-machine communication increases (and as more intelligent, autonomous machines come online), this problem will become even more acute.

Edge AI—deploying full-stack intelligent computing in complex, contested, or remote environments—directly addresses these limitations. By processing data where it's generated, Edge AI enables real-time insights even with limited connectivity. Early adopters in manufacturing, logistics, defense, and energy are beginning to demonstrate that Edge AI can be far more than just IoT – which has largely failed to live up to expectations. We see several catalysts creating an inflection point for Edge AI adoption and impact. 

  • Hardware Innovation: AI-specific chips and accelerators have made high-performance computing viable in small, power-efficient form factors. What once required a data center can now run on a device the size of a credit card.

  • Data Gravity: The explosion of sensors is generating unprecedented amounts of physical world data. Moving this data to centralized clouds is becoming unsustainable.

  • Regulatory Pressure: Privacy and data sovereignty requirements increasingly mandate local processing, particularly for data like healthcare records or surveillance footage.

  • Reindustrialization Momentum: Manufacturing and supply chain resilience initiatives are driving investment in intelligent, autonomous systems that require edge processing.

We see several compelling opportunities for emerging companies: 

  • Full-Stack Edge Infrastructure: Building the "AWS of the Edge" for remote and harsh environments—standardized, deployable compute and connectivity solutions that bring cloud-like capabilities anywhere.

  • Edge Foundation Models: Small models capable of enabling, as one company puts it, a “ChatGPT-like experience without requiring internet”.

  • Edge Orchestration Platforms: Software that enables enterprises to manage and scale thousands of distributed edge nodes, solving the operational complexity that stalls adoption.

  • Edge Security & Privacy: Tools and platforms that ensure edge deployments are secure and compliant, particularly critical for regulated industries and sensitive applications.

We expect the most successful companies in this category to eventually span multiple capability areas highlighted above.

However, our initial hypothesis on where the most value will accrue pushes us to look in a slightly different direction. We are biased (and informed) by investments in companies like KoBold and Gecko (as well as businesses like Helsing), which have led us to believe that companies capable of building end-to-end, vertical-specific action layers – i.e. integrating edge AI into specific industry workflows (e.g., mining operations, hospital systems, defense operations) – will be the biggest beneficiaries of this momentum.

Trust, domain expertise, and distribution, along with business model optionality (i.e. the possibility of going full stack as KoBold has done), should help these types of companies escape commoditization.

Physical world data is a key pillar of our investment approach as it relates to both infrastructure and fundamental industry transformation – Edge AI represents a fundamental shift in how physical world data is harnessed, enabling the creation of truly intelligent systems that can operate autonomously and adaptively in real-world environments. The companies that successfully bridge the gap between sensing and action – becoming the systems of intelligence for their industries – will capture enormous value in this transition.


Thesis

Supply Chain "Virtual" Integration

Building the command centers for weaponized supply chains in an uncertain world

March 2025

"Your supply chain is your weapon". This statement is increasingly true for companies battling trade uncertainty and governments navigating rising geopolitical tensions. Yet most critical supply chains are hampered – and placed at risk – by siloed data systems, manual coordination, and a fundamental disconnect between visibility and action.

Large companies might understand their direct suppliers, but have limited visibility beyond that—leaving them scrambling when tariffs suddenly change the economic equation for key components or suppliers deep in their supply base are sanctioned without warning. Without proactive intelligence and execution capabilities, organizations are perpetually in reactive mode, lacking the “responsive tooling” needed to anticipate disruptions or rapidly reorient end-to-end operations when they occur.

Governments face similar challenges, often lacking insight into the capabilities of their industrial base. For example, the average German defense RFP receives just two bids, and 40% of all German public bids end up with only one bidder, largely because governments can't proactively identify suitable suppliers. This not only limits competition and innovation but also creates significant vulnerabilities around national security, energy availability, and healthcare access when rapid mobilization is needed.

Traditional vertical integration offered a solution but often proved impractical in a highly competitive, globalized economy. This, in turn, led to the “death of the industrial conglomerate” toward the end of the last decade as it became clear such operating models lacked the right feedback loops with the market.

Today, we believe a new paradigm is emerging: the “virtual integration” of supply chains, where AI, paired with distributed sensing, production, and logistics capabilities, coordinates independent partners as seamlessly as if they were one unified operation.

Two major forces are creating a unique “why now” moment for new companies to emerge: 

  • Geopolitical and Regulatory Imperatives: Trade wars, export controls, and the push for supply chain sovereignty are forcing companies to completely reconfigure their value networks. At the same time, new regulations around forced labor, carbon emissions, and transparency are making deep visibility and control mandatory. In the public sector, governments (particularly in Europe) now recognize that industrial capability is fundamental to national security, with the lack of tools to map capabilities and mobilize suppliers creating critical vulnerabilities. These concurrent shifts demand execution platforms that can dynamically adapt networks, ensure compliance, and enhance national resilience.

  • AI and Edge Intelligence: LLMs can process massive amounts of unstructured data – from customs declarations to technical specifications to satellite imagery – making previously dark, inaccessible information actionable. Meanwhile, better asset tracking via sensor networks and edge computing is enabling companies to build "virtual redundancy" instead of physical redundancy. Platforms use AI to create self-correcting systems that continuously adjust supply and demand, enabling companies to operate leaner while maintaining resilience. By dynamically shifting resources based on real-time conditions, these systems effectively create redundancy through intelligence rather than excess inventory.
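The "virtual redundancy" point above can be illustrated with a toy rebalancing routine: instead of holding excess inventory everywhere, the system moves units from surplus sites to deficit sites as conditions change. Site names and quantities are, of course, invented.

    def rebalance(stock: dict[str, int], demand: dict[str, int]) -> list[tuple[str, str, int]]:
        # Redundancy through intelligence rather than excess inventory:
        # compute surplus and deficit per site, then plan transfers.
        surplus = {s: stock[s] - demand[s] for s in stock if stock[s] > demand[s]}
        deficit = {s: demand[s] - stock[s] for s in stock if stock[s] < demand[s]}
        moves = []
        for site, need in deficit.items():
            for src in list(surplus):
                qty = min(need, surplus[src])
                if qty:
                    moves.append((src, site, qty))
                    surplus[src] -= qty
                    need -= qty
                if surplus[src] == 0:
                    del surplus[src]
                if need == 0:
                    break
        return moves

    print(rebalance({"berlin": 120, "warsaw": 40}, {"berlin": 80, "warsaw": 90}))
    # [('berlin', 'warsaw', 40)] -- warsaw is still short 10, flagged for procurement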

As a result of these catalysts, we have seen the market evolve in two phases:

Phase I → Insight-Layer Companies – Horizontally-focused companies aimed primarily at visibility and planning by providing information and analytics to aid decisions. They aggregate data and highlight issues like compliance violations, supply disruptions, or market shifts, but generally stop short of executing decisions. While valuable, customers increasingly find these horizontal insights insufficient, and integrating one-size-fits-all platforms with highly specific use cases can be cumbersome. What companies need isn't just awareness of problems but solutions to them.

Phase II → Action-Layer Platforms – Vertically-focused operating systems that span digital and physical capabilities and that autonomously plan, adjust, and execute supply chain tasks in real-time. These platforms integrate into procurement, production, and logistics processes, becoming the end-to-end execution system for specific domains:

  • Electronic Components: Platforms that connect engineering design through procurement and logistics, managing supplier selection, automating purchase orders, and ensuring compliance.

  • Specialty Chemicals: Virtual contract manufacturing solutions that connect customers with production capacity, handling everything from formulation to production and distribution.

  • Construction Materials: Platforms that optimize the fragmented and highly regional construction materials supply chain to improve material sourcing efficiency and enable traceability for sustainability certifications and carbon accounting.

  • Defense Industrial Base: Solutions that map capabilities, connect prime contractors with innovative suppliers, and provide governments with greater visibility into their industrial ecosystems while remaining resilient to emerging threats to key infrastructure and supply lines (e.g. electronic warfare).

  • Healthcare: Platforms that actively manage inventory across distributed locations, predict demand surges, and automatically rebalance supplies during crises.

We believe execution-focused platforms offer a particularly compelling investment case for several reasons:

  1. Higher Stickiness: Systems that run daily operations become mission-critical infrastructure with strong retention and pricing power.

  2. Direct Value Capture: By controlling actual transactions or interactions, these platforms can adopt value-based pricing tied directly to volume or outcomes.

  3. Strong Operational Moats: Execution platforms accumulate proprietary data and build valuable supplier networks that become increasingly difficult to replicate.

  4. Market Demand for Action: Corporate and government buyers increasingly need solutions that not only identify problems but solve them.

Supply chain virtual integration represents an opportunity to fundamentally transform how physical goods move through the world. The most successful companies will be those that move beyond visibility to actual execution, building defensibility through proprietary data, network effects, and deep workflow integration.

In a world where supply chains are weaponized, these virtual integration platforms are emerging as the command and control systems for the global economy – helping organizations navigate unprecedented complexity with greater agility, transparency, and resilience.


Thesis

AI-Enabled Regulatory Compliance

Turning document-heavy, manual processes into an automated, continuous function.

March 2025

Over the past decade, regulatory compliance has transformed from a back-office function into a strategic imperative. Shifting trade agreements, supply chains, and geopolitical alliances create real-time regulatory threats that demand responses at the speed of software and AI. With global regulatory changes exceeding 200 per day and enforcement action intensifying, companies need more efficient approaches.

AI is emerging as the key enabler of this transformation, turning document-heavy, manual processes into an automated, continuous function. As the regulatory landscape expands beyond traditional domains into AI governance, ESG reporting, and data privacy, LLMs and advanced NLP can now parse complex regulatory text, automate monitoring of everything from communications to physical assets (effectively building a regulatory-focused digital twin of a global organization), and generate compliance content and actions at scale.
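A minimal sketch of that continuous loop is shown below: each incoming regulatory update is turned into a structured triage decision against the organization's profile. The LLM client and its `complete_json` method are hypothetical stand-ins for whatever structured-output interface a real system would use.

    class StubLLM:
        # Stand-in for a real structured-output LLM client (hypothetical API).
        def complete_json(self, prompt: str) -> dict:
            return {"applicable": True, "affected_systems": [],
                    "required_actions": [], "deadline": None}

    def triage(update: str, org_profile: str, llm) -> dict:
        # Turn an unstructured regulatory change into a structured action item.
        prompt = (
            f"Regulatory update:\n{update}\n"
            f"Organization profile:\n{org_profile}\n"
            "Return JSON with: applicable (bool), affected_systems (list), "
            "required_actions (list), deadline (date or null)."
        )
        return llm.complete_json(prompt)

    def monitor(feed: list[str], org_profile: str, llm) -> list[dict]:
        # Poll the change feed (200+ updates per day globally) and keep only
        # the items that actually apply to this organization.
        return [t for u in feed if (t := triage(u, org_profile, llm))["applicable"]]

    print(monitor(["New ESG disclosure rule..."], "EU-based manufacturer", StubLLM()))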

Companies that adopt such tools gain a dual advantage: protecting downside risk while using regulatory shifts to identify and exploit market opportunities faster than competitors. This transforms compliance from a defensive function into a strategic capability for navigating complex global markets. 

The most promising companies building AI for regulation-related use cases make this dual benefit a reality. They share a key insight: while solving compliance-related challenges is valuable, the long-term opportunity rests on the fact that regulatory compliance increasingly touches every aspect of an organization. By integrating deeply with core business systems and workflows, AI compliance tools create natural expansion opportunities extending beyond their initial wedge.

For example, a company that starts by automating regulatory change notifications can expand into policy creation, control testing, and risk management – moving from insight to the automated action layer for regulatory compliance and organizational governance. As these platforms ingest more data, they create network effects through anonymized benchmarking and best practices. In some cases, companies will play a role in actually shaping regulation, surfacing data and insights to lawmakers, and using AI to encode changes to make compliance more efficient. 

We observe distinct regional dynamics in this market, creating the opportunity for regional champions to emerge (much like in fintech). European companies, for example, benefit from the EU's aggressive regulatory agenda, with deep knowledge of EU frameworks creating defensible regional positions.

Our assumption is that the nature of the customer profile places a premium on clear, rapid ROI for specific compliance activities (e.g., 50% reduction in audit time or 90% faster regulatory analysis). This necessitates deep domain expertise and an integration-first architecture. We also see this as a category that will rapidly move toward consolidation: a small number of regional players will quickly gain momentum, while followers will be acquired by incumbents.

We see this small subset of winners emerging as essential infrastructure for modern business – solutions that transform compliance from a cost center into a strategic advantage deeply embedded in how organizations operate.


Thesis

Commodity Intelligence

Opportunities at the intersection of AI, supply chain disruption, and shifting trade alliances

February 2025

Infrastructure Tech

The rules of global trade are being rewritten (some would say fundamentally erased) in real-time. Escalating geopolitical tensions, shifting alliance structures, and weaponized trade policy are creating a new regime of persistent instability in commodity markets. Our working assumption is that this volatility isn't just a temporary disruption – it will persist.

As a result, companies across the economy have been forced to reassess their relationships with commodity markets. 

Consumer goods manufacturers need better ways to manage agricultural and packaging inputs. Industrial companies are seeking more control over metals and energy exposure – from steel mills hedging ore to electronics manufacturers securing rare earths to automakers managing access to battery metals. Transportation companies must navigate volatile fuel markets. Energy producers want to capture more value from their physical assets. 

What was once a back-office function built on just-in-time procurement and undifferentiated supplier relationships has become a critical source of competitive advantage (or weakness).

As a result, companies that once outsourced market participation to traditional trading houses are now more actively developing in-house capabilities to manage their commodity exposure. 

This strategic reorientation parallels what we're seeing across the physical economy: as the real world becomes more measurable and markets more dynamic (i.e. our “Shortification of Everything” thesis), the gap between leaders and laggards will widen dramatically. Companies that build sophisticated capabilities to understand and act on physical world data will create powerful feedback loops – better data driving better decisions driving better data. Those that don't will face an increasingly insurmountable disadvantage as AI and automation accelerate the pace of market adaptation.

The emerging stack of technology solutions enabling this transition mirrors our Benchmarking the Physical World thesis. Just as standardized data and metrics enabled the development of modern financial markets, new platforms for understanding and pricing physical world risk are becoming critical infrastructure for commodity market participants.

Within this landscape, we can roughly divide the companies into a few areas, recognizing that there is significant overlap (which will grow as breakout companies expand) and that each industry has its inherent differences.

  • Trade Intelligence Platforms: Companies building proprietary data collection and analytics infrastructure to decode physical commodity flows. Winners start by aggregating hard-to-replicate datasets before layering on predictive analytics and trading signals. The sequencing opportunity lies in building deep, defensible information advantages in specific verticals to unlock expansion across commodity classes and deeper into the value stack.

  • Digital Marketplaces & Trade Execution Platforms: Platforms that modernize how physical commodities are traded, bringing efficiency, liquidity, and transparency to markets that traditionally rely on brokers, OTC transactions, or relationship-driven trading. These companies start by digitizing specific workflows (such as matching buyers and sellers, trade documentation, and contract negotiation) before expanding into value-added services like financing, logistics, and risk management.

  • Trading Capability Infrastructure (“Embedded Trading”): Software and AI-powered platforms that enable companies to build sophisticated trading operations from scratch. As more companies seek to capture trading margins around their physical assets, winners will combine deep vertical expertise with flexible technology that helps clients identify, execute, and optimize trading strategies while managing risk across their operations.

While financial markets have been transformed by algorithmic trading and real-time data, physical trading still operates largely through manual processes and personal relationships. The winners in this category will combine deep domain expertise with sophisticated technology for processing multi-modal physical world data. Those that succeed will become the foundational infrastructure for pricing and allocating physical resources in an increasingly complex global economy.


Thesis

Data-Driven Fashion

OrtegAI

February 2025

Digital Transformation

The fashion industry operates on a fundamental tension between creative vision and market reality, with traditional design-to-retail cycles spanning months and requiring significant upfront capital commitments before knowing if designs will resonate with consumers. Despite the rise of "fast fashion" players like Zara and H&M who compressed this timeline to weeks, the industry still faces persistent challenges of overproduction, waste, and missed market opportunities.

We believe an AI-driven approach represents a paradigm shift in how fashion products move from concept to customer. Unlike other design fields that require complex engineering validation, fashion design outputs can be rapidly translated from compelling visuals to manufacturable products, creating a unique opportunity for end-to-end AI transformation.

This transformation will unfold across three interconnected domains:

  1. Constraint-Aware Generative Design: AI systems that can help designers go from sketch or idea to full-fledged design visuals faster while considering the supply chain and cost constraints inherent to those designs (a toy constraint filter is sketched after this list).

  2. Pre-Production Analytics and Validation: Using AI-generated renders and photography as market sensing tools before committing to production. This includes digital showrooming, social media testing, and other data-gathering approaches to validate demand before manufacturing begins.

  3. Supply Chain Integration and Rapid Production: Direct connection between validated designs and manufacturing partners, with automated translation of designs into production-ready specifications for quick turnaround.
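As referenced in the first item above, here is a toy version of a constraint filter: candidate designs are scored against supply chain constraints before a designer ever reviews them. The cost model, fabric prices, and fields are all invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Design:
        fabric: str
        fabric_meters: float
        n_seams: int

    FABRIC_COST = {"cotton": 4.0, "silk": 18.0}   # EUR per meter, assumed
    SEAM_COST = 0.35                               # EUR per seam, assumed

    def unit_cost(d: Design) -> float:
        return d.fabric_meters * FABRIC_COST[d.fabric] + d.n_seams * SEAM_COST

    def feasible(candidates: list[Design], target_cost: float,
                 available: set[str]) -> list[Design]:
        # Keep only designs the supply chain can actually produce at target cost.
        return [d for d in candidates
                if d.fabric in available and unit_cost(d) <= target_cost]

    candidates = [Design("cotton", 1.5, 12), Design("silk", 2.0, 20)]
    print(feasible(candidates, target_cost=15.0, available={"cotton", "silk"}))
    # cotton: 1.5*4.0 + 12*0.35 = 10.2 (kept); silk: 2.0*18.0 + 20*0.35 = 43.0 (dropped)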

The convergence of these capabilities creates the foundation for a new class of fashion business that can dramatically reduce time-to-market, minimize inventory risk, and better align production with actual demand.

Note: This approach builds on innovations pioneered by Ortega (Zara) in traditional fast fashion, but supercharges them with AI to achieve unprecedented speed and precision in market response.

Technology Enablers

Several technology breakthroughs make this vision increasingly feasible:

  • Advances in generative AI for fashion design now produce photorealistic renderings that can accurately visualize garments in various styles, materials, and on different body types.

  • Machine learning systems can now incorporate manufacturing constraints directly into the generative process, ensuring that designs respect material characteristics, production capabilities, and cost targets.

  • Digital twin technology enables virtual sampling and fitting without physical prototyping, compressing development timelines.

  • Supply chain digitization and API-first manufacturing partners create the potential for seamless handoffs from design to production.

Market Evolution

We see three approaches emerging to leverage these technologies:

  1. AI Design Accelerators: Tools that significantly speed up the design process while respecting manufacturability constraints. Early leaders will focus on specific categories where design-to-production translation is most straightforward. 

  2. Supply Chain Orchestrators: Platforms that manage the production side, from real-time information on production availability and prices to automated production and logistics.

  3. Full-Stack Platforms: Offerings that include everything needed to either launch their own brands or enable others. We believe these could increasingly empower new types of brands (e.g., creators) with minimal technical design backgrounds and potentially even enable users to create n-of-1 pieces.

Key Success Factors

We believe successful companies in this space will demonstrate:

  1. Constraint Engineering Excellence – The ability to translate manufacturing realities into parameters that guide AI design generation is a core differentiator.

  2. Supply Chain Integration – Deep relationships with manufacturers who can rapidly produce small runs of validated designs will be essential to realizing the full value proposition.

  3. Brand Position – While the technology enables rapid iteration, successful companies will still need a coherent brand identity to guide design direction and customer expectations.

The most compelling opportunity may be in platform companies that can build the infrastructure connecting AI design capabilities, consumer testing channels, and manufacturing partners. These platforms could either operate their own fashion brands or provide technology to existing brands and retailers, potentially creating network effects as they accumulate design, consumer preference, and manufacturing data. As with the evolution of fast fashion in previous decades, we expect the first breakout company in this space to combine technological innovation with strong operational execution and distinctive brand positioning.



Thesis

“Plumbers” of the Reindustrial Revolution

Like traditional plumbers, these companies are focused on high-stakes problems where failure carries outsized consequences.

January 2025

Industry Transformation

While neo-Primes and OEMs capture headlines and venture capital flows, specialized players solving critical service, infrastructure, and component-level problems will be fundamental to transforming the physical economy. 

We call these businesses the "Plumbers" of the Reindustrial Revolution because, like their namesakes, they occupy an unglamorous but essential (and hard to dislodge) position in their value chains. These companies are modernizing playbooks pioneered by industrial giants: Westinghouse in critical components, Bureau Veritas in trust and data, Schlumberger in technical services, and Grainger in supply chain orchestration. 

Like traditional plumbers, these companies are focused on high-stakes problems where failure carries outsized consequences. Their businesses are built first on technical mastery and reliable execution, which fosters deep customer trust and loyalty. Competition remains limited not just because of technical complexity, but through the “niche” nature of their markets – rational actors won't deploy massive capital to displace established players in constrained categories like they might in unbounded markets. This creates a foundation for expansion into adjacent opportunity areas – deepening existing customer relationships or extending technical capabilities to expand TAM over time. 

A key theme across much of our research is how geopolitical competition is redrawing supply lines and catalyzing efforts to rebuild industrial capacity in Western markets. The existential threat motivating this has been a potential conflict with China. But even in a positive scenario where kinetic conflict is avoided – and even as “expected unexpected” events like DeepSeek’s R1 impact the Capex and energy equations – we (and others) believe the trend towards spending on reindustrialization will continue. 

Thus far, the narrative surrounding the Reindustrialization tailwind has primarily benefited companies at the "front end" – next-gen OEMs, new Primes, and companies building brands easily understood by generalist investors that control most of the growth capital in the ecosystem. This is reflected in valuations – the early champions have access to near-unlimited pools of cheap growth capital while earlier-stage players are priced at levels that assume near-perfect execution. While we share the market-level excitement about the new levels of scale the best companies in this market can achieve, we have been more circumspect in our approach to this category.

As competition continues to rise on the front end of the market, our hypothesis is that the most attractive risk-return opportunities will increasingly be found with the "plumbers”, which we see emerging across four primary categories:

Critical Components

Then → Westinghouse's air brake system, invented in 1869 as railway networks reached continental scale, transformed railroad safety and became an industry standard, which created the foundation for one of the largest industrial conglomerates of the 20th century.

Now → The new material, form factor, and communication requirements of modern aerospace and defense systems create opportunities for specialized component makers to become standards in critical subsystems, from wire harnesses to thermal management to energy storage. 

Trust & Data Engines

Then → Bureau Veritas built a global franchise by becoming the trusted verifier of maritime safety standards as international trade expanded rapidly in the 19th century.

Now → The confluence of aging existing infrastructure and the need for new development creates opportunity at the intersection of novel inspection technology and data analytics to become the system of record and intelligence for asset health, compliance, and built world capital allocation.

Superdevelopers

Then → Schlumberger became indispensable by mastering the technical complexity of oil exploration and production when the petroleum industry was rapidly expanding into new geographies.

Now → The energy transition as well as the emergence of “new Prime frontiers” (e.g. the Arctic and space) creates opportunities for companies that can i) develop proprietary technology suited for challenging environments, ii) develop project execution capabilities to integrate other solutions, and iii) master the regulatory complexity of operating in new areas. 

Supply Chain Orchestration

Then → Grainger was founded in the 1920s to provide customers with consistent access to motors as both the consumer and industrial markets for automotive and other powered machinery exploded.

Now → Electrification and UAV growth are driving demand for components like batteries, which are largely controlled by China and at increasing risk of tariffs and blockades. This creates new opportunities to build marketplace infrastructure for “democratic supply chains” and better supply chain coordination.

Across these different pathways, we think successful companies will share several characteristics:

  1. Natural capital efficiency and organic growth – Sharper focus avoids growth-at-all-costs capital strategies and expansion plans, fostering a more sustainable model for sequencing market opportunities.

  2. Rational competitive landscape – Perceived (initial) market sizes typically don't justify massive capital deployment by new entrants or existing players, while technical expertise and regulatory requirements create genuine barriers and, in some cases, help companies aggregate a portfolio of “sub-scale monopolies”.

  3. Value accrues to expertise (i.e. Process Power) – Deep knowledge of specific systems, regulations, or technical requirements becomes more valuable as complexity increases and companies either work across a broader segment of the overall value chain or integrate deeper into customer operations. 


1. The EDA market is one of the best examples of this. Companies like Cadence and Synopsys are each worth ~$80b and relatively insulated from competition because their TAM (as a % of the overall semiconductor market) and their cost (as a % of the overall semiconductor design and fabrication process) are small. From NZS Capital:

“As they're successful, they're able to layer on these new businesses that are really additive to the overall business. So they may not even be increasing in price, in a lot of cases, just selling more functionality, because chip designers need it. And it's a really important point to underscore that we're talking about this 550 billion TAM of semiconductors, and the TAM of devices on top of that is another step function. It's being enabled by this sort of 10 billion EDA TAM. It's really small, when you think about what they're delivering.”

“But the idea that more EDA could come in-house over time, it just seems really unlikely to me, in part, because it's just not a huge pain point for the customer. It's 2% of their sales, and they just get so much value for what they're giving, versus the effort to re-engineer all this stuff that's been created over the last few decades.”

2. Much like last decade where being the Uber or Airbnb for X was an unlock for high-priced early financing, the same is true today of companies promising to become the Anduril or Palantir for X.

3. This relates to our thinking on AI-enabled asset ownership/buyout opportunities.


Thesis

Transforming Clinical Trials

How can we massively speed up the timeline – and reduce the cost – of bringing new drugs to market?

January 2025

While the interplay of AI and better data is (finally) beginning to deliver on the potential of dramatically expanding the therapeutic opportunity space, these breakthroughs risk being stranded or significantly delayed without a transformation of the clinical trial process.

We believe several factors have converged to create an exciting ‘why now’ for companies building new clinical trial infrastructure.

  1. The post-COVID regulatory environment and evolved operating procedures have created a unique window for reimagining clinical trials. 

  2. Remote monitoring, decentralized trials, and real-world evidence have moved from fringe concepts to validated approaches.

  3. The explosion in AI-discovered therapeutic candidates is creating pressure to modernize trial infrastructure for both human health and economic reasons – it is estimated that the cost of clinical trial delays can be on the order of millions of dollars per day.

Our initial hypothesis is that winning companies will possess the following characteristics.  

  1. Vertically integrated, building parallel infrastructure instead of patching the existing system. The complexity and interconnectedness of clinical trials mean that point solutions will struggle to drive meaningful change. For n-of-1 companies to exist in this space they need control over the full stack – from patient recruitment through data collection and analysis. This approach is about more than technological self-determination. It also positions companies to innovate on the financial model of clinical trials towards better alignment among all of the key stakeholders (i.e. risk/upside sharing).

  2. AI (and post-COVID) native, designing their processes around modern capabilities rather than retrofitting them onto legacy approaches. This means leveraging AI for everything from protocol design to real-time monitoring while embracing decentralized/hybrid trials and remote data collection as first principles rather than accommodations.

  3. Built to capture the growth of AI-driven drug discovery (i.e. new companies) rather than competing for share in the traditional clinical trial market. This allows them to sidestep entrenched competitors to work with customers operating with the same true north of speed and technical advancement.


Thesis

Off-Road Autonomy

Reversing this physical world stagnation represents one of the largest economic opportunities of the coming decades.

January 2025

Infrastructure Tech

The Western infrastructure crisis is about more than aging bridges and roads (and elevators) – it's about our capacity to build, maintain, and modernize the physical systems that underpin productivity, economic growth, and strategic sovereignty. From critical mineral extraction for the energy transition to military logistics modernization to the massive manufacturing capacity needed to achieve reshoring objectives, we face unprecedented demands on systems that have seen little innovation in decades.

Reversing this physical world stagnation represents one of the largest economic opportunities of the coming decades. This is reflected in our work from several angles – most notably our investments in KoBold and Gecko, and through category research into energy infrastructure, sustainable construction, and defense.

It is easy to blame this stagnation on a lack of investment or an absence of vision among industrial (and bureaucratic) operators. But these are symptoms of the fact that physical world modernization – both digitization and automation – is not a monolith and a vast majority of the work that needs to be done is a fundamentally harder problem than commonly understood.

The environments where we have most significantly slowed down and thus where we most need automation – sectors like construction and defense as well as settings like logistics yards – are characterized by high situational diversity: dynamic conditions, variable tasks, and diverse equipment fleets that often stay in service for decades. While continuous process industries like chemicals and manufacturing have made significant strides in automation, these high-diversity environments have remained stubbornly resistant to transformation.

Automating heavy industrial vehicles – earthmovers, mining equipment, military Humvees – represents an important step to mastering these environments and fundamentally transforming the productivity equation in these industries. While much of the discussion around physical world automation has centered on robotics or on-road consumer autonomy (Waymo, Tesla, etc.), these vehicles sit at the intersection by unlocking both autonomous mobility and task execution/manipulation capabilities. They are the workhorses of our industrial system, will continue to be for a long time, and are just now starting to become equipped for autonomous operation. 

"Today you have a few thousand [autonomous] vehicles in mining, you have a few hundred vehicles in ag, you have dozens of vehicles in other verticals. I think we're really at the starting line now. Ag, for example, is nearly 3 million tractors. Obviously only a small percentage of those are big enough or productive enough to be automated. In construction equipment there's a million plus units. You look at something like mining, there's something like 60,000 dump trucks. So those are your upper bounds. But today the biggest successes are in mining where you've got north of a thousand units deployed, which, when you compare to on-road, is in a similar realm." – Sam Abidi, Apex Advisors

Technology Tipping Points → Our robotics research leads us to believe that the category is approaching (or reaching) technological tipping points on several fronts. While on-road autonomy has focused on well-marked roads and predictable conditions, industrial autonomy faces fundamentally different challenges. These environments demand systems that can handle unstructured terrain, weather variations, and complex interactions between vehicles, machines, and humans.

Several technological advances are converging to finally make this possible: 

  • Vision-language models (VLMs) and advanced perception systems that can understand both geometric and semantic elements of complex environments

  • Mapless localization capabilities that enable adaptation to rapidly changing conditions without relying on pre-existing maps

  • Improved sensor fusion that can differentiate between traversable elements (like foliage) and true obstacles while understanding surface characteristics (a toy fusion rule is sketched after this list)

  • Edge computing architectures designed specifically for ruggedized, industrial deployment

  • Robotic hardware improvements (e.g. dexterity) that can be incorporated into autonomous systems to unlock end-to-end operational capacity.
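To illustrate the sensor fusion point above, the toy rule below combines geometry (height), semantics (class label), and an estimated stiffness to decide traversability: geometry alone would treat tall grass as an obstacle, while the fused signal lets a planner drive through it. All thresholds and labels are illustrative.

    def traversable(height_m: float, label: str, stiffness: float) -> bool:
        # Fuse geometric and semantic cues into a single traversability call.
        SOFT = {"grass", "foliage", "crop"}
        if height_m < 0.2:
            return True                     # low enough to ignore either way
        if label in SOFT and stiffness < 0.3:
            return True                     # tall but compliant: drive through
        return False                        # tall and rigid: a true obstacle

    print(traversable(0.8, "foliage", 0.1))  # True  -- tall grass, keep moving
    print(traversable(0.8, "rock", 0.9))     # False -- genuine obstacle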

Talent and Capital Momentum → Along with the technological building blocks for this category, the talent seeds were planted over the last decade as capital and big visions fueled the first wave of autonomous vehicle company building. Frustrated by autonomy regulation and other bottlenecks, founders and engineers started to look for opportunity areas where product roadmaps – and commercial models – could be realized in 2-3 years rather than a decade. This drew many to off-road autonomy – despite the much smaller TAM – and has led to a flurry of company formation and funding in the space.

Investibility – From Action Layer to Systems of Collaborative Intelligence → Building on our thesis in vertical robotics, we see retrofit industrial vehicle autonomy as a powerful near-term lever for modernizing infrastructure. The economics are compelling: retrofit solutions can deliver substantial cost savings versus new autonomous vehicle purchases while allowing customers to preserve their existing fleet investments, which often have 15-20+ year lifespans.

We see a clear sequence for how companies build defensible positions in this category:

1. Action layer as a go-to-market wedge:

  • Target 80-90% automation of common tasks while preserving human oversight

  • Lead with collaborative service model combining autonomy systems with expert support

  • Focus on high-ROI use cases where the service model can support end-to-end execution

2. Systems of Record

  • Proprietary datasets around vehicle performance, environmental conditions, and task completion

  • Fleet management and analytics capabilities that span multiple vehicle types/brands

  • Data-driven maintenance and operations optimization

3. Systems of Collaborative Intelligence

  • Coordination and resource planning across operators, vehicles, and robotic systems

  • Serve as integration layer for next-generation capabilities, whether built internally or via partners

  • Consider deeper integration (going beyond retrofitting) to increase system-level advantages


This follows the progression we expect to see Gecko take in the data-driven (and increasingly automated) inspection market and is being proven out now by off-road autonomy companies like Outrider, which has expanded from electric yard trucks using a patented robotic arm to a full suite of site infrastructure and logistics operations management systems. It is worth noting that we believe this same sequencing may not hold when selling to militaries who tend to be more concerned about vendor lock-in and thus less receptive to “operating system” style offerings. 

Still, we believe companies operating purely at the "action layer" will have limited long-term defensibility and will need to uplevel their capabilities over time. The path forward also likely includes hybrid models – as evidenced by Caterpillar and Teleo's approach of using remote operation as a bridge to full autonomy, allowing skilled operators to work from anywhere while systematically identifying repetitive tasks suitable for automation.

This progression allows companies to build trust through immediate value delivery while laying the foundation for deeper workflow transformation. The key is maintaining the flexibility to evolve alongside customer needs and technological capabilities rather than forcing premature standardization.

We are particularly interested in companies targeting:

  • Heavy industrial operations (construction, mining, and agriculture, depending on use case) where environmental variability is high but equipment standardization is low.

  • Military and defense logistics, which require operation across diverse terrain with mixed vehicle fleets.

  • Port and industrial yard operations, where dynamic routing and complex interactions between machines and humans are the norm.

This thesis faces two primary risks. First, a breakthrough in robotics foundation models could make the retrofit/incremental approach less compelling, though our discussions with leading robotics companies suggest they are not underwriting dramatic commercial-level breakthroughs on even a ~5-year horizon. Second, growing concerns about AI's impact on employment could spark regulatory pushback, though acute labor shortages in these industries create powerful countervailing forces.

Overall, we believe the combination of sensing, decision-making, and physical execution in high-variability environments represents an attractive wedge to become industrial operating systems in several categories.

July

Thesis

Personal Security

The traditional concept of security, once firmly rooted in the domain of the state, is undergoing a significant transformation.

January 2025

Fundamental Consumer

The traditional concept of security, once firmly rooted in the domain of the state, is undergoing a significant transformation. Individuals are increasingly taking responsibility for their own safety and well-being, driven by a confluence of factors, including rising crime rates, the proliferation of cyber threats, and a growing awareness of the limitations of state-provided security in the digital domain. This shift is particularly evident in the digital realm, where the rise of sophisticated AI-powered scams and the increasing abundance – and value – of personal data online (shared both knowingly and unknowingly) have created a new era of individual responsibility. We believe that as individuals become more proactive in managing their own security, the personal security market is poised for significant growth, offering a wide range of opportunities for companies that can provide innovative and effective solutions.

This shift finds its manifestation in the proliferation of data breaches and spam calls, which have become a major concern for individuals and businesses alike. In 2023, approximately 56 million Americans lost money to phone scams, with total losses reaching an estimated $25.4 billion annually. These scams often involve impersonating loved ones or authority figures, leveraging highly personal information to solicit urgent financial assistance or sensitive information.

This is exacerbated by the fact that scams and misinformation campaigns will only become more sophisticated from here as they leverage AI-powered voice cloning and deepfake technology. This sets off what we often refer to as an evolutionary arms race between the deceiver and the detector. In this environment of heightened risk and uncertainty, individuals are taking a more proactive approach to their security.

Moreover, as societies become more polarized, personal information is easily accessible, and doxing becomes more prevalent, we expect this sense of perceived risk to spill over into the real world as well.

We believe that the opportunity can take various forms. From digital identity protection and deepfake detection to physical home security platforms, personal security companies are leveraging technology to empower individuals and provide a sense of control over their safety and well-being.

July

Thesis

The Robotics Smiling Curve

Embodied AI reallocates value from hardware to intelligent foundation models and specialized vertical solutions, fueling leaps in productivity across complex tasks.

January 2025

Infrastructure Tech

Where will value flow as embodied AI takes off?

We are convinced that AI, deployed in robotics systems with the unconstrained ability to navigate and interact in the physical world, will be one of the biggest unlocks of productivity and abundance in our lifetime. The convergence of tumbling hardware costs, breakthroughs in AI, and mounting pressure for automation across critical industries has created an unprecedented opportunity for transformation in how physical tasks are performed.

What started 50+ years ago with the optimization of rote industrial tasks has evolved through distinct phases: first, the automation of controlled, repetitive workflows like warehouse pick-and-place operations, and now, the potential to handle end-to-end responsibilities in complex, multi-dimensional environments—from factory floors to healthcare facilities to homes.

This evolution comes at a critical juncture. Labor shortages in key industries, aging populations, and shifting supply chains in response to climate change and geopolitical pressures have created an urgent imperative for modernization. In industrial settings, where ROI drives decision-making, robotics solutions are already catalyzing larger automation budgets. In consumer settings, where emotional factors play a larger role, mounting evidence (e.g. Waymo adoption) suggests growing readiness for automation in everyday tasks.

As with any market opportunity, we are interested in understanding which technological and commercial capabilities are most scarce (and thus most valuable) and along with that, which parts of the value chain emerging companies are best positioned to win. 

Technological Tailwinds

The massive talent and capital flows into robotics over the past few years have been catalyzed by an unprecedented convergence of technological breakthroughs. This convergence is moving robotics from a hardware-centric paradigm (led by companies like ABB and FANUC) to one where intelligence and deep workflow integration capabilities drive market power.

At the core of this shift is the emergence of multi-modal foundation models that sit at the intersection of language understanding, vision perception, and spatial awareness. As DeepMind's Ted Xiao observed in his survey of 2023's breakthroughs, we're witnessing not just technological advancement but a philosophical transformation: "a fervent belief in the power of scaling up, of large diverse data sources, of the importance of generalization, of positive transfer and emergent capabilities."

This shift is backed by technological progress across several dimensions:

  1. Transformer architectures have opened entirely new possibilities for how robots process and act upon information from the physical world. Projects like Google's RT-X and RT-2 and TRI's work on General Navigation Models demonstrate the potential for end-to-end, general-purpose automation of dynamic physical interactions. These advances are particularly powerful in their ability to turn abstract concepts ("verbs") into context-specific actions – understanding, for instance, the crucial differences between opening a door and opening a phone.

  2. The hardware equation is rapidly shifting in favor of commoditization and widespread deployment. The emergence of cheaper, modular components across perception (cameras, radar, lidar), control (motors, actuators), and power systems is making the economics of cognitive robotics increasingly viable. Companies like Unitree are demonstrating how quickly hardware capabilities can advance when paired with improving intelligence layers. Perhaps more importantly, as these intelligence layers improve, robots can achieve more with simpler hardware configurations – a virtuous cycle that further improves deployment economics.

  3. Advances in computing infrastructure, both in cloud environments for heavy workloads and at the edge for real-world autonomy, have expanded the frontier of possible applications. This is complemented by breakthroughs in simulation, synthetic data generation, and cross-embodiment learning that promise to help robotics overcome its historical data scarcity challenges.

However, these tailwinds – and the ability for companies to defend technological advantages – are not evenly distributed across the value chain. For this reason, we believe the Smiling Curve is a useful framework for understanding where and how value will accrue in embodied AI.

In short, we see the most value flowing to i) foundation/world models that can generalize across tasks and embodiments and ii) specialized applications that can leverage these capabilities to solve high-value problems in complex domains. The traditional middle of the value chain – hardware manufacturing and systems integration – faces increasing pressure as intelligence becomes more important than mechanical sophistication. Similarly, data generation, labeling, and processing will also face downward pressure as big tech companies with ample access to data seek to drive commoditization to benefit other parts of their business (in robotics and beyond).

This creates two paths through which we believe emerging companies have the biggest advantage in sustainably creating value.

Robotics Foundation Models

Robotics foundation models have the potential to be the operating systems and action layer for the physical environment, transforming commodity hardware into real-world agents.

For RFM companies, we see “data gravity” as a key to success – the ability to create self-reinforcing loops where model improvements drive adoption, which in turn generates more valuable training data. Unlike language models, which could draw on the vast corpus of human-generated text on the internet, robotics models face a fundamental data scarcity challenge. Outside of self-driving vehicles, no one has accumulated the volume of real-world interaction data needed to train truly general models.

This scarcity creates a unique strategic opportunity. A company that can solve the data acquisition challenges through strategic partnerships and deployment models will build powerful network effects. As their models improve, they become more valuable to hardware partners and application developers, generating more deployment opportunities and thus more data – a virtuous cycle that becomes increasingly difficult to replicate.

Vertical Robotics: Deep Integration and Domain Expertise

At the other end of the curve, we see compelling opportunities for companies that can deeply embed robotics capabilities into important workflows in critical industries. These companies succeed not through general-purpose intelligence, but through their ability to solve complex, high-value problems. 

We believe vertical robotics approaches are most valuable where:

  • The workflows governing interactions between robotics and operational systems are highly complex

  • Social dynamics and regulatory requirements favor trusted brands with deep domain expertise

  • The cost of failure is high, creating strong incentives to work with specialists

  • Domain-specific data creates compounding advantages that are difficult for generalists to replicate

Companies like Gecko Robotics (July portfolio company) in industrial inspection exemplify this approach. Their competitive advantage stems not from robotics capabilities alone, but from the domain-specific meaning they extract from collected data. This creates a different kind of data moat – one built around understanding the nuances and edge cases of specific applications rather than general-purpose interaction. It also creates a wedge to expand deeper into a customer’s operations, both via increasingly intelligent workflow tools and more advanced robotics solutions. In addition to inspection, categories like defense & security and construction represent prime areas for vertical solutions to create value. 

Vertical robotics opportunities also force us to consider whether emerging companies or incumbents are best placed to succeed. Despite the massive amounts of capital invested in logistics and warehouse robotics in recent years, outcompeting Amazon – which has famously externalized many of its cost centers into massive businesses, to the detriment of venture-backed competitors – is a tall order. Likewise, consumer distribution and brand advantages held by companies like Amazon and Meta place most new companies at a significant disadvantage.

The Interplay Between RFMs and Vertical Solutions

We also believe there is significant potential for interaction between companies at the two ends of the curve; e.g. Gecko integrating a model from Physical Intelligence. Vertical solution providers can become valuable data partners for foundation model platforms, providing real-world interaction data from high-value use cases. Foundation model platforms, in turn, can help vertical solutions expand their capabilities without massive R&D investment in core robotics intelligence.

July

Thesis

Frontline Audio and Video

Next-generation platforms that combine AI-powered language understanding with advanced audio-video capture are set to revolutionize frontline work by transforming raw field data into trusted, industry-wide operating systems.

December 2024

Industry Transformation

Only a few years ago, while touring the new maintenance facility of a publicly traded aerospace company, an executive pointed out several innovations: automated tool checkout, more advanced safety equipment, and a (physical) file room located closer to the operating floor than ever before. That this last feature was included is telling. The feedback loops between frontline action and data input are central to the operations of many industries – manufacturing, policing, and trade services of all varieties (from plumbing to solar installation). Key elements like pricing estimates, project timing, and resource requirements are often functions of what workers are observing in the field or on the factory floor. 

Despite comprising the majority of the global workforce, frontline workers have been largely left behind by technological transformation. The inefficiencies are stark: law enforcement officers spend up to four hours per shift on documentation, with 96% reporting these demands keep them from core duties. In 2021, nearly ¾ of frontline workers were still using paper forms. But workers are ready to adopt new solutions. In manufacturing, 93% of workers believe software tools help them perform better and 96% would be willing to accept increased data monitoring in exchange for benefits like improved training and career development.

The convergence of several forces is creating an unprecedented opportunity to reshape frontline work and fundamentally change how operational knowledge is captured and leveraged. Advances in language understanding mean systems can now adapt to how workers naturally communicate, uncovering deeper context without forcing rigid input structures. Improved video processing and computer vision add meaning to streaming footage, while ubiquitous mobile devices and sensors enable both active and passive capture (which also contributes to a safer – hands-free, eyes-up – working environment). The maturation of retrieval-augmented generation (RAG) technology makes it possible to connect this unstructured frontline data with existing knowledge bases – from maintenance manuals to captured tribal knowledge – creating powerful feedback loops between observation and action.
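
As a rough illustration of the retrieval step described above, here is a minimal sketch in Python, using TF-IDF similarity as a stand-in for learned embeddings; the knowledge-base snippets and field note are invented:

```python
# Minimal retrieval sketch: ground a generated work order in the most relevant
# knowledge-base entry. TF-IDF stands in for learned embeddings; all snippets
# and the field note are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    "Pump P-104: grinding noise usually indicates worn bearings; see manual section 4.2.",
    "Inverter fault E031: check the DC disconnect before resetting.",
    "Hydraulic hose replacement requires depressurization and lockout/tagout.",
]
field_note = "tech reported a grinding sound from the pump during morning inspection"

vectorizer = TfidfVectorizer()
kb_vectors = vectorizer.fit_transform(knowledge_base)
scores = cosine_similarity(vectorizer.transform([field_note]), kb_vectors)[0]

# The retrieved context would be passed to a language model along with the note.
context = knowledge_base[scores.argmax()]
print(f"Context: {context}\nField note: {field_note}\nTask: draft the work order.")
```

In a production system, the same retrieval loop would run over audio transcripts and video-derived observations rather than typed notes, but the observation-to-knowledge feedback structure is the same.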

The winners in this space will build trust by solving acute pain points in documentation and training, then expand to become essential operating infrastructure for their target industries. We see distinct opportunities across market segments. For SMBs – independent trades (“America’s new millionaire class”), farms, medical practices – these solutions can function from day one as a sort of COO and assistant, both improving operations and increasing enterprise value by making tribal knowledge transferable in eventual exits. For larger companies with field forces, manufacturing operations, or driver fleets, these tools accelerate training, surface best practices, and build operational continuity.

In both cases, we believe frontline audio and video capture will serve as the data wedge to become the system of record and intelligence for entire operations. Winners will need vertical focus – the needs of a solar installer differ meaningfully from those of a manufacturer or farmer. Trust and deep industry understanding are critical, as these companies will increasingly look to serve as the automated action layer for their customers, with business models that reflect the value they create (i.e. outcome-based pricing). The platforms that successfully capture and leverage frontline insights won't just become systems of record for individual companies – they'll emerge as the operating systems for entire industries, fundamentally reshaping how skilled frontline work gets done.

July

Thesis

Outcome-Based Pricing

A dominant narrative around how economic models will shift in response to AI is that companies can now “sell the work”

December 2024

Infrastructure Tech

A dominant narrative around how economic models will shift in response to AI is that companies can now “sell the work” – replacing seat-based pricing or subscription fees with models more directly tied to the value they create. This trend mirrors the evolution of digital advertising, where sophisticated attribution and optimization layers emerged to maximize and capture value.

Early evidence of this transformation is showing up in software verticals with well-defined (and highly measurable) workflows. In October, Intercom reported that 17% of recent software purchases included outcome-based pricing for their AI capabilities, up from 10% in the previous six months. One customer using Intercom’s “Fin” chatbot, RB2B, said the system autonomously resolved 60% of customer support tickets in August, saving 142 hours of human work. At $0.99 per resolution versus $10 for human handling, this represents both dramatic cost savings and a new pricing paradigm tied directly to outcomes.
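
The unit economics behind these numbers are easy to verify; a back-of-the-envelope sketch using the figures above (the monthly ticket volume is our illustrative assumption):

```python
# Back-of-the-envelope math on outcome-based support pricing, using the
# reported Intercom/RB2B figures; ticket volume is an illustrative assumption.
tickets_per_month = 10_000          # assumed volume
ai_resolution_rate = 0.60           # share resolved autonomously (reported)
price_per_ai_resolution = 0.99      # USD, Fin's outcome-based price (reported)
cost_per_human_resolution = 10.00   # USD, human handling cost (reported)

ai_resolved = tickets_per_month * ai_resolution_rate
monthly_savings = ai_resolved * (cost_per_human_resolution - price_per_ai_resolution)
print(f"{ai_resolved:.0f} AI-resolved tickets save ${monthly_savings:,.0f}/month")
```

At these rates, roughly 90% of the cost of every AI-resolved ticket is returned to the customer as savings, which is what makes the per-resolution price an easy sell.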

As AI capabilities accelerate, we expect a rapid build-out of supporting infrastructure focused on enabling and capturing this value creation and cementing this new economic paradigm. The demand side is already primed – companies face increasing pressure to deploy AI in high-ROI use cases, knowing their competitors (or new AI-native entrants) will if they don't. 

This dynamic is driving the emergence of several distinct outcome-based business models:

  1. Full-stack players aiming to fundamentally reshape the economics of critical industries (particularly those resistant to new technology adoption) represent the purest path to AI-driven outcome-based pricing. Companies like KoBold in mining aren't simply delivering a point solution to an existing value chain – they are using AI to transform how value is created and captured across the entire workflow. In doing so, they take on the full risk/reward that comes with attempting to reorient the economic structure of a deeply entrenched system. Similar opportunities exist in healthcare, where AI-driven approaches could dramatically reduce cost structures while improving patient outcomes, and in commercial real estate, where end-to-end platforms can reshape everything from building operations to tenant experience to energy management.

  2. End-to-end workflow solutions in well-defined/quantitative areas like sales (Salesforce) or customer service (Intercom, Zendesk). Here, we believe emerging AI-native players face a significant uphill battle. Incumbents that cover multiple steps of a company’s workflows have data, distribution, and value attribution advantages, while more companies are pursuing internal builds through "spontaneous software" tooling or by leveraging commodity infrastructure (LLMs) to develop custom solutions – as Klarna recently did to great fanfare and apparent success. The company’s OpenAI-powered chatbot is “doing the work of 700 people” as it handles ⅔ of the company’s service interactions.

  3. Infrastructure players are emerging to accelerate the adoption of outcome-based business models for AI services. We see opportunities for new solutions to handle attribution (measuring AI's impact across complex workflows), market-making (matching AI capabilities to business problems while optimizing for ROI), and financial infrastructure (enabling novel pricing structures). The parallel to mobile advertising is particularly instructive – companies like AppLovin didn't just facilitate transactions, they fundamentally transformed how value was created and measured in their market. These infrastructure players won't just serve existing markets – similar to Stripe in software, they'll expand the opportunity by making it possible for new types of AI services to emerge and scale.

  4. We also expect to see the emergence of teams that develop superior "process power" in AI implementation. Similar to how some organizations mastered lean manufacturing or agile development, these teams will systematically identify industries where AI can collapse cost structures (while maintaining value delivered), rapidly prototype and deploy AI solutions that replace expensive, manual workflows, and build durable institutional knowledge about which AI approaches work best for specific business problems.

    One way of thinking about this opportunity is as a modern version of Rocket Internet or Thrasio, but instead of geographic arbitrage or aggregation plays, they'd specialize in AI-driven transformation of stagnant sectors via an integrated product and go-to-market engine that allows them to capture a commensurate share of the value they create in an ecosystem. Perhaps a more ambitious framing is that a new class of private equity giants will emerge from this paradigm of buying and improving software and service businesses with AI (i.e. modern Constellation Software). 

Unsurprisingly, we believe the most attractive opportunity lies not in incrementally improving existing services with AI, but in fundamentally reimagining how industries operate. This leads us to the two areas we are most intrigued by:

  1. Infrastructure providers enabling more precise outcome measurement, verification, optimization, and value capture across the AI services economy.

  2. Full-stack players who combine AI capabilities with deep domain expertise to fundamentally transform industry economics.

July

Thesis

Infrastructure + Industrial OT Security

Protecting critical digital and physical assets

November 2024

Infrastructure Tech

The industrial landscape is undergoing a substantial transformation towards network-driven operations defined by a massive number of new connected devices and software-driven coordination.

In energy, distributed production and smart grids are replacing centralized power plants. EV charging networks, real-time traffic coordination programs, advanced telematics and autonomy systems, and distributed warehousing models are changing the footprint of how we move goods and people. Factories are bringing production equipment online for the first time, enabling dynamic condition monitoring and remote intervention. Advanced robotics are beginning to arrive en masse, increasing the importance of connectivity and software (data) in nearly every corner of the industrial economy.

Across much of this evolving landscape, security remains an afterthought. Major initiatives, like the $7.5 billion allocated under the Bipartisan Infrastructure Law to build a national network of EV chargers, focus on how new solutions will be deployed, not how they will be defended.

With nation-state conflict increasingly shifting to target critical private sector assets and infrastructure, the expanded attack surface exposed by this transformation is being exploited to devastating effect. High-profile attacks like 2017's WannaCry (North Korea) and NotPetya (Russia), 2021's Darkside attack on the Colonial Pipeline, and Volt Typhoon's (China) recent attacks on US water, energy, and transportation systems highlight the breadth of the challenges facing industrial companies.

In 2023, OT attacks on industrial assets rose by 19%, affecting over 500 physical sites across manufacturing, energy, transportation, and building automation systems. Companies like Clorox and Johnson Controls lost tens of millions of dollars because of OT attacks, while Applied Materials lost $250m in sales stemming from an attack on a supplier, MKS Instruments. Enterprise spending on OT cybersecurity is projected to grow almost 70% from 2023 levels to $21.6 billion globally by 2028.

The Industrial OT market is defined largely by heterogeneity and legacy complexity. Unlike pure-play IT projects where enterprise deployments share common requirements, industrial OT projects are fragmented and bespoke thanks to physical constraints and hard-to-replace legacy systems.

Historically, customers made OT security procurement decisions in parallel with capital investments in new factories or machinery. While incumbents like Palo Alto Networks and Cisco have long seen the need and have moved aggressively to fill the gap, few companies have successfully shifted from services to recurring product revenue business models.

This is starting to change. As more modern, open hardware systems and sensors are deployed in industrial settings, technology companies can address larger portions of each customer's fleet of assets on day one. The convergence of IT and OT has helped to simplify the sales process. We believe this will spark a resurgence of interest in emerging OT security solutions and create the foundations for emerging companies – those capable of developing OEM-agnostic solutions that can be easily adopted and scaled across industries – to grow.


July

Thesis

European Public Safety Primes

With war on its doorstep and American attention shifting to the Pacific, Europe has been forced to prioritize resilience and strategic autonomy for the first time in a century.

November 2024

Industry Transformation

With war on its doorstep and American attention shifting to the Pacific, Europe has been forced to prioritize resilience and strategic autonomy for the first time in a century. But the threat to Europe’s way of life and future prosperity goes beyond front-line kinetic conflict. 

As the threat environment converges, the why now case for companies building for public safety and security in Europe, for Europe, gets stronger by the day. Migration pressures, cyber threats, and cross-border crime require capabilities that existing systems simply cannot deliver. Europe must invest more proactively in innovation across other public safety and security pillars: law enforcement, fire and disaster response, and intelligence.

Across markets, AI is driving a step change in our ability to understand and act upon the physical world. The convergence of AI with real-world data – cameras, drones, satellite imagery, and other sensory inputs – makes it possible to build an intelligence layer that processes complexity at an unprecedented scale. This is opening new frontiers in public safety and security. Companies that can harness, integrate, and analyze this explosion of data to drive faster, safer, and more accurate decisions stand to become category champions and play a key role in forming the foundations for Europe’s long-term growth and strategic autonomy. 

Across the world, advanced policing systems are delivering for forward-thinking law enforcement and border control agencies. Solutions like Flock help solve over 700,000 crimes annually, making departments more efficient, while drones drive faster and safer responses. As resistance to US technology persists, expanding EU internal security budgets and increasing evidence that these systems work will push Europe to seek out homegrown solutions.

Fire and disaster response – helping mitigate the €77b in annual losses from natural disasters and protect human livelihoods – represents another critical opportunity area. New entrants combining predictive modeling of weather and the built environment with proactive intervention capabilities will capture value by closing Europe's critical response gaps.

Finally, intelligence agencies are approaching a breaking point: drowning in multi-modal data (from video to financial transactions) while inter-agency coordination fails. Companies that bridge European fragmentation while navigating privacy mandates will become essential infrastructure, enabling real-time response to physical and digital threats.

We see an opportunity for a new class of European "Public Safety Primes" to establish themselves in the market. The Axon playbook – now a $45b+ company built through methodical expansion from tasers to body cameras to a comprehensive digital evidence platform – shows what's possible. The company has effectively zero penetration in Europe, and local players like Reveal Media and Zepcam remain subscale. Winners will start narrow with a must-have product, earn trust through measurable impact, and expand across the public safety landscape as system-wide advantages compound.

July

Thesis

Composable Companies

Composable companies fuse enduring core expertise with agile, mission-focused teams to rapidly capture diverse market opportunities and redefine traditional business models.

November 2024

Infrastructure Tech

A new organizational model is emerging: the composable company – organizations that blend permanent infrastructure with fluid product capabilities. At their core, they maintain:

  • Capital and resource allocation expertise

  • Shared technical infrastructure

  • World-class talent

  • Possibly, strategic customer and distribution relationships

By centralizing these unique capabilities, composable companies can swiftly identify, validate, and scale opportunities across their chosen markets. Around this foundation, teams can be rapidly assembled and reconfigured to pursue specific missions/product opportunities on various time scales.

This model excels in markets where opportunity spaces are in flux and an organization needs the flexibility and bandwidth to build out ideas that compound together around a coherent view of the future but might find their manifestation in distinct products for distinct customers.

Recent developments in AI further enhance this model's viability by enabling more cost-effective creation of software and supporting customization for specific use cases:

  • Reducing software development costs

  • Streamlining maintenance requirements

  • Improving customer support efficiency

  • Enabling more cost-effective creation of AI tools

The Resulting Structure

The end product could be a holding company-style enterprise that combines:

  • The above-described core infrastructure

  • Multiple AI products and tools with varying scale and durability

This structure enables the efficient pursuit of numerous opportunities while maintaining the potential for further asymmetric returns from breakthrough successes among them or in aggregate.


July

Thesis

Marketplaces for AI-Enabled Services

AI-powered, asset-light platforms now empower creators and knowledge workers to build profitable one-person companies that disrupt traditional firms and democratize professional services.

October 2024

Infrastructure Tech

The Rise of One-Person Companies

The unbundling of the firm has been in flight for decades. As the internet enabled increased access to global labor markets, outsourcing to lower-cost countries exploded. The proliferation of cloud computing and mobile took this a step further, making it possible to externalize an increasing number of key operational functions and allowing for more asset-light business models. This prompted a thesis several years ago that the rise of “One Person Companies” remained an underrated opportunity. 

The next step in the evolution of the firm will build on this but will come at the problem from a different direction. It will be defined by the rise of One-Person Companies. Creators and knowledge workers will access external services that provide the capabilities to start and scale a firm and then re-bundle them in unique ways around their core skill set. They will monetize by selling products, services, and expertise to an owned audience that their core skill set has helped them build.

New platforms and infrastructure providers will emerge to support the tens of millions of individuals capable of building successful One-Person Companies along with the billions of consumers and businesses that will support them. More generally, the rise of the One Person Companies will inject dynamism into the broader economy and will play a role in driving more inclusive innovation.

AI – particularly agentic solutions capable of proactively understanding and executing end-to-end business workflows – represents the next leap in this evolution. As several investors and operators have observed, AI is empowering small groups more than ever before, and new businesses across the economy (i.e. not just tech startups) are building from inception with AI literacy as a core competency. According to Gusto, roughly a fifth of businesses created last year said they were using generative AI to more efficiently carry out things like market research, contract reviews, bookkeeping, and job postings.

Current and Future Market Structure

In complex, non-commodity service categories like strategy consulting, law, investment banking, and wealth management – where key individuals inside of large companies often already “run their own book” – we believe these forces create the opportunity for further fragmentation; i.e. the “creator economy-ization” of professional services.

A study cited in a 2015 Forbes article about million-dollar solo consulting businesses indicates this opportunity is not new. 

The U.S. Census Bureau found that of 3.2 million "nonemployer" professional services businesses in the U.S., there were 7,276 firms that brought in $1 million to $2.49 million in revenue in 2013, the most recent year for which statistics were available. And 321 superstars brought in $2.5 million to $4.99 million.

For the sake of simplicity throughout the document, we will refer to these companies as Service OPCs, though there is of course no reason why it must be a single person.

In practical terms, we believe we are entering a period where an even larger number of individuals or small teams with a differentiated skill set or value creation mechanism (product) will increasingly be able to leverage the marketplace (instead of “the firm”) for distribution and operational capacity to build profitable and durable OPCs.

This thesis rests largely on the idea that some elements of human judgment are inherently non-scalable/automatable (similar to our thesis around where value is captured in AI-driven content creation) and thus that the dynamics of the market will tend more towards specialization – thousands of small, profitable “winners” – rather than winner-take-all.

A services Mittelstand rather than Silicon Valley concentration.

We are interested in what the agentic technologies that drive end-to-end workflow execution will look like and what the coordination mechanism across those autonomous services will be for Service OPCs. Without both of these things becoming a reality in parallel, the overhead of identifying and managing dozens of AI agents (some of which will inherently be more end-to-end than others) – while growing a client base and playing the most important role of delivering the differentiated service (even if some elements are made more efficient through automation) – is likely enough to push promising OPCs back into the arms of McKinsey or Kirkland & Ellis.

Effectively, we believe there is a Shopify-like opportunity to “arm the rebels” and build an ecosystem-level operating system for the AI-driven services transformation – combatting empire-building incumbents who view AI as a solidifier of their market positioning and what are sure to be dozens of overfunded venture-backed businesses promising to be “the next Goldman Sachs”.

Product and Technical Hypothesis

By engaging at the aggregation and coordination level, we are interested in answering the question of how a platform might “package” a business around an OPC’s core skill set to help it grow beyond its pre-AI agent potential. 

While we want to avoid being overly prescriptive in our analysis at such an early stage, we believe that for such a platform to represent a comprehensive – and attractive – alternative to the firm for Professional Service OPCs, it would possess some or all of the following characteristics (features), listed roughly in order of how they might be sequenced from a product perspective:

1. Functional Automation (Operational Capacity) – This pillar would serve as an "Agent Store," featuring both proprietary workflows and third-party end-to-end AI agent solutions. It would offer OPCs end-to-end functional agents for various business operations, such as:

  • Contract management

  • Financial management and forecasting

  • Compliance and risk assessment

  • Resource allocation and project management

  • Continuous learning and skill development

  • Marketing and public relations

  • Legal execution

It is also interesting to consider how such a store could provide a distribution channel for third-party developers of specialized AI solutions like Devin (for software development) or Harvey (for legal services), or the dozens of AI agent companies seemingly launching each week (a quick scan of the most recent YC class highlights how prevalent this model has become for early-stage companies).

These developers would be incentivized to use the platform because it goes beyond simply offering access to agents, helping OPCs “package” a specific set of agents around the skills and ambitions of the company – which brings us to the next pillar of the platform.

2. Organizational Coordination (The AI COO) – The AI COO acts as the central nervous system of the OPC, ensuring all parts of the business work together seamlessly. Key functionalities include (see the sketch after this list):

  • Automated integration between functional agents (the Bezos API Mandate on overdrive)

  • Workflow optimization across all business functions

  • Stakeholder communication management

  • Strategic decision support

  • Continuous improvement engine for business processes (i.e. vetting and implementing improved solutions or workflows autonomously)
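
To make the coordination concept concrete, here is a deliberately simplified sketch of how an AI COO might route work across functional agents; all agent names, interfaces, and routing logic are hypothetical:

```python
# Conceptual sketch of an "AI COO" dispatching tasks to functional agents.
# Agent names, routing logic, and interfaces are all hypothetical.
from typing import Callable

def contracts_agent(task: str) -> str:
    return f"drafted agreement for: {task}"

def finance_agent(task: str) -> str:
    return f"updated forecast for: {task}"

# Registry of end-to-end functional agents (the "Agent Store" inventory).
AGENTS: dict[str, Callable[[str], str]] = {
    "contracts": contracts_agent,
    "finance": finance_agent,
}

def coo_dispatch(task: str) -> str:
    """Route a task to the right agent. A real coordinator would also verify
    outputs, chain dependent tasks, and escalate low-confidence work to the
    human owner."""
    category = "contracts" if "agreement" in task else "finance"
    return AGENTS[category](task)

print(coo_dispatch("new client agreement for the Q3 engagement"))
```

The hard part, of course, is not the dispatch mechanics but the verification, inter-agent data contracts, and autonomous vetting of new workflows that the list above implies.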

This pillar is critical in attracting and retaining both OPCs and third-party AI solution providers. For OPCs, it offers unprecedented operational efficiency and is the core enabler of leaving the firm behind for good. For AI solution developers, it ensures their tools are integrated effectively into the OPC's operations, maximizing utility and long-term revenue potential.

With these pillars working together, such a platform aims to create a robust ecosystem that not only supports individual OPCs but also fosters a network of AI solution providers. This symbiotic relationship between OPCs, the platform, and AI developers has the potential to drive rapid innovation cycles and expand the market in the same way Shopify has done in e-commerce for physical goods.

Antithesis

While we have a reasonable degree of confidence that the end state of the unbundling of the firm will look something like what we have laid out above (“Shopify for professional services” is likely a primitive analogy for what we will have in 2050), there are several reasons to be wary of the thesis. Much of this hinges on market timing as well as the common question of whether this will enable truly novel business models to emerge that incumbents are structurally unable to compete with.

  • We may be underestimating incumbent entrenchment, particularly around trust and social signaling, and incumbents' ability to adapt. “Nobody got fired for hiring McKinsey, Goldman, etc.” While apparently not (yet) at the operational level, incumbent consulting firms have been among the biggest beneficiaries of the generative AI explosion.

  • Regulatory, compliance, and legal structures may change more slowly than the technology. Sectors like law and finance are heavily regulated. OPCs might face significant hurdles in meeting compliance requirements without the resources and infrastructure of larger firms, potentially limiting their ability to operate in certain (high-value) areas.

  • Integration (i.e. the AI COO) may be substantially more complex than we have described. The reality of seamlessly integrating multiple complex AI systems could be far more challenging and error-prone than expected, leading to inefficiencies or significant mistakes.

July

Thesis

Benchmarking the Physical World

Standards are the hidden force behind market efficiency, capital formation, and global trade.

October 2024

Infrastructure Tech

Standards are the hidden force behind market efficiency, capital formation, and global trade. From the meter to the shipping container, standards create the shared layer of trust that helps markets function and capital flow.

In 1860, at the outset of America’s railroad frenzy, Henry Varnum Poor published “History of the Railroads and Canals of the United States”. This work was the first attempt to arm investors with data on the burgeoning industry and laid the foundations for what is now Standard & Poor’s — a $100b+ company with $11b in annual revenue. Alongside its long-lived triopoly counterparts, Moody’s and Fitch, it has persisted thanks to powerful standards-based moats that make their frameworks critical infrastructure for global capital markets.

“We think of ourselves as a benchmark company. I mean, data is in vogue now, and people are really kind of a bit obsessed with data and data companies… I think data is nice, it’s interesting. But if you could turn something into a benchmark, it really transcends data.”
SVP at S&P Global, November 2020

As Marc Rubinstein wrote in “The Business of Benchmarking”, universal standards are usually unassailable. The risk for companies that manufacture them is less that their moat is crossed and more that their castle becomes irrelevant. We believe the current geopolitical, economic, and technological moment is creating a once-in-a-generation opportunity to successfully counterposition and eventually (finally!) displace the global ratings and benchmarking oligopoly.

Several forces are converging to create this opportunity. First, Great Power Competition is fundamentally reshaping global trade and industrial activity. The push for energy independence, secure supply chains, and strategic autonomy is driving massive investments in decarbonization and reindustrialization. Reconfigured trade flows and industrial priorities demand new frameworks for understanding risks and opportunities. Second, the growth of sensor networks, connected devices, and geospatial systems has created unprecedented visibility into physical world operations and trade flows. This proliferation of data – from factory floors to shipping lanes – provides granular, real-time insights that were previously impossible to capture. Finally, advances in AI and machine learning allow us to process and derive meaning from complex, multi-modal data at the scale and speed demanded of modern trade.

We've seen the fundamental transformation of key commodity markets firsthand through our investment in KoBold Metals. Better collection and analysis of physical world data is revolutionizing resource discovery and development. Meanwhile, geopolitical machinations are accelerating the reconfiguration of global supply chains and trade routes, creating urgent demand for new frameworks to understand and price physical world assets. Traditional frameworks – built for a different era of global trade – are increasingly misaligned with markets that require real-time, granular insights to make decisions.

Success in this market isn't about attacking the incumbent oligopoly directly. Through counterpositioning, the opportunity lies in building for the new industrial economy with a model native to the speed and complexity of modern trade. Winners will start narrow, building density of data and trust in specific verticals, before sequencing alongside their customers' evolving needs to develop new pricing and risk infrastructure for the physical economy.

July

Thesis

Ports, Shipbuilding, and Marine Logistics Infrastructure

Vertically integrating our way to maritime parity

September 2024

Industry Transformation

Control over maritime infrastructure represents a significant geopolitical and economic advantage. China identified this importance early and is leveraging this reality, consolidating power and influence through strategic port investments. Today, China controls upwards of 90 major international ports, largely in the developing world. Meanwhile, the US doesn’t operate any major ports abroad. Europe also lacks a significant global presence and has ceded many critical assets to China.

Shipbuilding deficiency goes hand in hand with the shifting balance of naval power. China’s shipbuilding capacity is orders of magnitude greater than the rest of the world combined. 

There is an imperative and an opportunity for Western innovation to serve as a counterweight, thereby rebalancing the scales of influence. However, the barriers to achieving this are significant. Union influences and regulatory constraints in the US and Europe have impeded Western innovation in maritime infrastructure, logistics, and production. Even with the will to counter China’s Belt and Road maritime strategy, we currently lack the process knowledge and technological capacity to compete effectively.

The importance of building more effective maritime infrastructure goes beyond geopolitical influence. Increased global trade, driven by standardization like the container ship, has had a transformational positive impact on global poverty. Ports and maritime infrastructure also have a critical role to play as catalysts for a decarbonized shipping industry and as sustainable energy and industrial hubs.

A core assumption is that the pathway to meaningful transformation can't be solely based on developing technology for the existing market. That is to say that selling software and automation into the existing market – whether to ports or shipyards – won’t be enough. Because of the regulatory roadblocks mentioned above, emerging companies have limited distribution power. 

Instead, we believe in a more integrated approach, in line with the view that a degree of vertical integration is often a necessary component of new industrial innovation. Companies building and operating ports will likely integrate back to vessel operations themselves. Shipbuilders will need to be ship designers. And so on. 

For example, we should explore the possibility of acquiring, building, owning, and/or operating ports and physical marine infrastructure. This has the potential to create a symbiotic relationship where a company can be both the innovator and the “first and best customer”, allowing it to rapidly test, iterate, and improve its solutions in a real-world setting. Here, we can look to companies like Resilience and their end-to-end biomanufacturing model (acquiring/retrofitting manufacturing sites, developing automation technologies) for inspiration. KoBold Metals is another, of course.

The preliminary thesis, therefore, suggests that a strategic blend of technology development, port ownership/control, and operational excellence can provide a Western counterweight to China's marine infrastructure strategy and contribute to the development of a safer, more effective global trade system via increased automation, transparency, and standardization.


July

Thesis

Sustainable Construction

Construction is one of the world’s largest industries.

September 2024

Industry Transformation

Construction is one of the world’s largest industries. Global construction spending in 2023 amounted to some $13 trillion, 7% of global gross output. It is also one of the most unproductive sectors of the economy. Tight labor markets, regulatory complexity, and systemic fragmentation, along with cultural inertia, have contributed to stagnation and a lack of technological penetration.

This ineffectiveness does not discriminate by project size or scope. While nearly everything we touch and consume is produced in mass quantities, factory-produced homes still make up a small percentage of the overall new housing stock. Meanwhile, 98% of mega-projects experience cost overruns of 30% or more, and 77% face delays exceeding 40%. The impacts on broader economic growth are significant. Had construction productivity matched that in manufacturing over the past 20 years, the world would be $1.6 trillion – 2% of GDP – richer each year. Increasing pressure to decarbonize places additional stress on the low-margin, change-resistant industry. Through both operations (28%) and materials/inputs (11%), buildings account for nearly 40% of global emissions.

These supply-side deficiencies come against a backdrop of rapidly expanding demand – by 2040, the industry needs to expand production capacity by 70%+. This is creating a desperate, and long overdue, search for answers that we believe can only be met by a combination of technological innovation and novel production and business system design. 

While prior attempts to transform construction – most notably Katerra – have failed, several factors are converging to create a more compelling why now moment. Novel materials like green steel and low-carbon cement are approaching commercial viability, while mass timber innovations make building faster and less wasteful – while delivering significant carbon sequestration. Construction robotics focused on autonomous assembly, logistics, and data capture can address the labor gap. Perhaps most importantly, advances in generative design and AI-powered collaboration tools can help target the small but critical coordination inefficiencies that have historically bottlenecked progress – precisely the type of system-wide improvements that Amdahl's Law suggests are essential for meaningful transformation.
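
Amdahl's Law makes the coordination point concrete: the overall gain is capped by whatever share of the work is left unimproved. A quick illustration (the 70/30 split between automatable and coordination work is invented for the example):

```python
# Amdahl's Law: overall speedup = 1 / ((1 - p) + p / s), where p is the share
# of work improved and s is the improvement factor. The 70/30 split between
# automatable work and coordination work is invented for illustration.
def overall_speedup(p: float, s: float) -> float:
    return 1 / ((1 - p) + p / s)

# Automating 70% of a project 5x over yields only ~2.3x overall...
print(round(overall_speedup(0.70, 5.0), 2))        # 2.27

# ...while also halving the remaining 30% of coordination work lifts it to ~3.4x.
print(round(1 / ((0.30 / 2) + (0.70 / 5.0)), 2))   # 3.45
```

This is why AI-powered coordination tools, despite targeting a "small" share of total effort, can unlock disproportionate system-wide gains.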

We believe the companies that capitalize on this moment will do so through one of two models. The first is selective vertical integration – controlling critical capabilities in materials, design, and manufacturing, but executed with greater focus and patience than previous attempts. The second is a platform approach that centralizes key material and system design and standardizes interfaces between stakeholders while allowing specialized players to focus on their core competencies – similar to how semiconductor manufacturing evolved.

Both models recognize three essential elements that must work together: First, standardized approaches to next-generation materials that maximize both assembly efficiency and carbon benefits, from green steel to mass timber. Second, digital infrastructure that enables true system-wide optimization and seamless stakeholder coordination. Third, controlled manufacturing environments that bring automotive-style productivity to strategic components, whether owned directly or orchestrated through a network of partners.

July

Thesis

Contested Communication, Navigation, and Connectivity

Opportunities in the electromagnetic spectrum

May 2025

Infrastructure Tech

The electromagnetic spectrum has emerged as the primary battleground across defense, security, and commercial applications. As adversaries increasingly deploy sophisticated, yet inexpensive, jamming and spoofing capabilities, traditional communications and navigation infrastructure is struggling to keep pace. In Ukraine, Russian EW reduced GPS-guided artillery accuracy from ~70% to just 6%, while disrupting tactical communications across multiple frequency bands. In the Red Sea, Houthi interference has affected both ship navigation and satellite communications, showing that even non-state actors now possess substantial spectrum-denial capabilities. Electronic warfare now sits at the center of the conflict between India and Pakistan. 

As other technical components are commoditized – via rapid hardware iteration cycles (i.e. “hardware at the speed of software”) for smaller systems and converging capabilities for larger platforms – the ability of militaries and infrastructure providers to defend against electromagnetic attacks becomes critical.

Over the last couple of years, several categories of capabilities have emerged around electronic warfare deterrence: (1) resilient communications leveraging frequency-hopping, mesh networking, and LPI/LPD waveforms; (2) alternative PNT solutions including distributed reference networks, vision-based positioning, and enhanced inertial systems; (3) spectrum situational awareness through distributed sensing and rapid electromagnetic analysis; and (4) operational resilience via multi-modal redundancy, AI-enabled adaptation, and distributed decision-making.
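
As a toy illustration of the first category: the essence of frequency-hopping is that both ends of a link derive the same channel sequence from a shared secret, so they stay synchronized while a jammer without the key cannot predict the next hop. Real LPI/LPD waveforms are of course far more sophisticated; the channel indices and key below are invented:

```python
# Toy frequency-hopping sketch: transmitter and receiver derive identical
# channel sequences from a shared key; a jammer without the key cannot
# predict the hops. Channel indices and key are invented for illustration.
import random

CHANNELS = list(range(100, 200))  # notional channel indices

def hop_sequence(shared_key: int, n_hops: int) -> list[int]:
    rng = random.Random(shared_key)  # deterministic given the key
    return [rng.choice(CHANNELS) for _ in range(n_hops)]

tx = hop_sequence(shared_key=2025, n_hops=5)
rx = hop_sequence(shared_key=2025, n_hops=5)
assert tx == rx  # both ends stay in sync without ever transmitting the schedule
print(tx)
```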

Current solutions are largely built by defense primes and legacy telecommunications companies. This market structure has resulted in fragmented, expensive solutions with limited civilian adoption and minimal cross-domain innovation. However, several catalysts are creating opportunities for new players, including the proliferation of software-defined hardware and sensors, autonomous systems requiring reliable communications and positioning, commercial technologies (like Starlink) proving resilient in conflict zones, and AI enabling new approaches to signal processing and spectrum management.

While the capabilities are clearly critical, we have questions about whether these standalone solutions will command premium pricing through sustainable differentiation or become capabilities integrated into larger platforms (with distribution power resting with the primes/hardware developers). The answer may vary by segment, with specialized military applications supporting premium pricing due to certification requirements and mission-critical outcomes, while commercial applications may see contested operation capabilities become table stakes, with value accruing to system integrators and platforms.

We generally remain skeptical about the emergence of platform companies (spanning defense and civilian domains) in this space, given the challenges highlighted above and the overall fragmentation of the market in terms of requirements and use cases. However, we see a path to highly capital-efficient scale for companies that can radically collapse the cost structure, dramatically improve usability, and set themselves up for rapid adoption, thus avoiding some of the program-level integration that slows distribution and shifts power to larger players.

July

Thesis

Whole Genome Sequencing

Driving down the cost of precision healthcare

April 2025

Infrastructure Tech

The dramatic decline in whole-genome sequencing costs—falling exponentially faster than Moore's Law—has positioned genomics as a foundational technology set to transform healthcare, science, and biotechnology. Historically, sequencing a single human genome cost billions during the Human Genome Project; today, that cost has plummeted toward a hundred dollars per genome, driven primarily by breakthroughs in sequencing hardware (e.g., Illumina, Oxford Nanopore, PacBio, Ultima Genomics), sophisticated software infrastructure (e.g., alignment algorithms, AI-driven variant calling like Google's DeepVariant), and cloud-based bioinformatics pipelines.
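
A rough comparison shows how stark the gap with Moore's Law is; the figures below are approximate public numbers (roughly $100M per genome in 2001, on the order of $600 by the early 2020s), rounded for illustration:

```python
# Rough comparison of sequencing-cost decline vs. a Moore's Law pace,
# using approximate public figures rounded for illustration.
cost_2001 = 100_000_000   # ~USD per genome in 2001
actual_recent = 600       # ~USD per genome by the early 2020s
years = 21

# Moore's Law pace: cost halves every two years.
moores_law_cost = cost_2001 / 2 ** (years / 2)
print(f"Moore's Law pace would predict ~${moores_law_cost:,.0f} per genome")
print(f"Actual: ~${actual_recent} – about {moores_law_cost / actual_recent:,.0f}x cheaper")
```

On these rough numbers, a Moore's Law pace would still leave a genome costing tens of thousands of dollars; actual sequencing costs undershot that trajectory by two orders of magnitude.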

This infrastructure can be conceptualized as a clear technology stack: at the base, specialized sequencing hardware generates raw genomic data; above this sits the sequencing layer, transforming raw signals into digital sequences; followed by a dedicated analysis layer, interpreting this sequence data into actionable insights; finally, the application layer utilizes these insights to deliver tangible value across clinical diagnostics, personalized medicine, synthetic biology, agriculture, and consumer wellness.

As sequencing costs continue to decline, the implications expand dramatically. Routine genomic sequencing will transition from a specialized procedure into everyday clinical diagnostics, akin to common blood tests. Individuals could receive continuous genomic monitoring—tracking genetic mutations, predispositions to disease, or response to treatments over time. This ongoing genomic "surveillance" would enable unprecedented early disease detection and proactive healthcare management at scale.

Beyond routine diagnostics, ultra-cheap sequencing unlocks massive scientific opportunities. With the ability to economically sequence millions of genomes, researchers can accumulate vast genomic datasets. These datasets, previously inconceivable due to cost constraints, will empower machine learning models capable of uncovering complex biological patterns and causal relationships that currently elude scientific understanding. Consequently, genomics-driven AI has the potential to illuminate fundamental biological mechanisms, significantly accelerate drug discovery, and foster innovation in synthetic biology by providing comprehensive, real-world training data.

In essence, the falling genomic sequencing cost curve is not merely an economic curiosity—it represents an inflection point in biology, medicine, and biotechnology. By transforming genomic data into an accessible, ubiquitous commodity, society stands at the brink of a profound shift toward precision health and accelerated scientific discovery, powered by genomics at unprecedented scale.

Key Questions:

  • From a technology perspective: How likely is it that sequencing will become cheap enough for widespread adoption? What defines "cheap enough," and what kind of new markets or applications could be unlocked by this?

  • On stack opportunities: Which part of the genomics technology stack is most compelling for investment or innovation? If genomics today parallels the semiconductor industry of the 1970s, is the infrastructure (sequencing and hardware) layer particularly interesting?

  • On investability and winners: How investable is this space, and what factors will determine the winners? Ultima Genomics appears promising, but what competing technologies exist? What are the regulatory hurdles, and how significant is the risk that established players might leapfrog emerging innovators?

  • On the analysis layer: Is the analysis layer less attractive due to the prominence of cloud solutions, open-source software, and powerful existing platforms? Or are there niche opportunities still worth pursuing here?

  • On the application layer: What consumer or end-user applications will emerge from cheaper genomics? Which areas (clinical diagnostics, consumer health, synthetic biology, agriculture) present the most exciting opportunities for transformative impact and investment?

July

Thesis

Job-Specific Report Writing

Removing unproductive "work around the work"

April 2025

Infrastructure Tech

Workers across industries face a massive documentation burden that diverts them from core functions, creating an enormous drag on productivity. Law enforcement officers spend up to 4 hours per shift on paperwork, physicians dedicate nearly half their day to EHR tasks, and social workers spend over 50% of their time on case notes versus only 20% with clients.

The convergence of multimodal AI, ubiquitous mobile devices, and improved computer vision creates an unprecedented opportunity to transform how operational knowledge is captured and acted upon. Advances in language understanding now enable systems to adapt to natural communication, while retrieval-augmented generation connects unstructured frontline data with existing knowledge bases.
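A minimal sketch of the retrieval step behind that connection: frontline notes are embedded as vectors, and the snippets closest to a query are pulled from the knowledge base to ground the generated report. The embed() function here is a stand-in for any text-embedding model; the example is illustrative, not a reference to a specific product:

  import numpy as np

  def embed(text: str) -> np.ndarray:
      # Placeholder: a real system would call a text-embedding model here.
      rng = np.random.default_rng(abs(hash(text)) % (2**32))
      return rng.standard_normal(64)

  knowledge_base = [
      "Incident report template for traffic stops",
      "Use-of-force documentation policy",
      "Shift handover checklist",
  ]
  kb_vectors = np.stack([embed(doc) for doc in knowledge_base])

  def retrieve(query: str, k: int = 2) -> list[str]:
      q = embed(query)
      scores = kb_vectors @ q / (np.linalg.norm(kb_vectors, axis=1) * np.linalg.norm(q))
      return [knowledge_base[i] for i in np.argsort(-scores)[:k]]

  print(retrieve("officer narrates a traffic stop into a body camera"))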

We believe that, in many cases, audio and video will serve as the data wedge that gives AI reporting solutions their initial foothold, after which these solutions expand to become the system of record for entire operations. These systems will evolve from documentation tools into platform-level operating systems for their industries.

We are particularly interested in segments where reporting is not "the work" itself but represents a significant drag on core functions: Law Enforcement, Healthcare, Field Services, and Social Services.

This opportunity extends beyond frontline work to professional services (where reporting often is “the work”). In asset-based lending, for example, AI can streamline coordination between companies preparing reports for lenders and the specialists (compliance, ESG, risk) who create separate analyses, potentially providing a wedge into higher-value financial services.

Incumbents today offer legacy documentation systems not designed for multimodal, real-time work.

We think the ideal value chain integrates data capture (multimodal inputs), processing (AI models tuned to specific industries), automated report generation, system integration, and analytics. Early entrants typically focus on capture and generation, integrating with existing systems rather than replacing them. 

However, full-stack players will emerge over time who aim to become the primary system of record by starting with documentation and expanding into adjacent workflows.

Several forces are accelerating adoption: acute labor shortages make documentation inefficiency untenable; multimodal AI models that process audio, video, and text with domain-specific understanding (and do it in real time) are coming rapidly down the cost curve and up the utility curve; and the proliferation of smartphones, body cameras, and sensors provides data at minimal cost. Frontline workers show increasing receptiveness, with 96% indicating willingness to accept increased data capture in exchange for reduced paperwork, creating strong user adoption dynamics alongside clear ROI.

We see the most compelling opportunities in categories where:

  • Documentation consumes 40%+ of professional time

  • Current processes remain paper-based or use outdated systems

  • Workflows involve multimodal data (voice, video, text) AI can now process

  • Regulatory requirements add complexity that AI can navigate

  • Solutions can evolve from point applications to systems of record and define novel workflows

The most promising approaches will be vertical-specific with deep domain expertise rather than horizontal platforms. Successful companies will build from multimodal data capture, prioritize trust and usability, and pursue a "land and expand" strategy.

There's also significant potential for more aligned pricing models. Rather than traditional SaaS subscriptions, outcome-based pricing (tied to time saved, compliance improvements, or capacity created) may better reflect the transformative impact and capture more value, particularly in budget-constrained industries.

In financial and professional services, report coordination itself can serve as a strategic wedge. By streamlining complex workflows around documentation, AI systems can position themselves to capture higher-value activities, moving from optimizing document flow to facilitating transactions or informing capital allocation decisions. While potentially less immediately scalable than frontline reporting, these opportunities could unlock substantial value where information coordination directly impacts financial outcomes.

July

Thesis

Construction Robotics

Creating value on the robotics smiling curve

April 2025

Industry Transformation

Construction represents one of the strongest and largest pools of opportunity on the "vertical application" side of the Robotics Smiling Curve.

With construction sector productivity growth under 1% annually for decades and 98% of megaprojects experiencing 30%+ cost overruns, the industry has massively underperformed. However, we are witnessing a critical inflection point where technological advances, economic pressures, and industry shifts are converging to drive robotics adoption at unprecedented rates.

While we have often been drawn to full-stack models in various sectors – e.g. KoBold Metals – we are seeing adoption take off faster in construction with more specialized plays. Even though the IRR on full-stack projects seems very strong (we’ve heard 50%+ in some cases), there is substantial friction in the system pushing back on an overhaul of the entire construction process.

Specialized solutions create immediate, measurable ROI that doesn't require waiting for an entire project lifecycle to validate viability. Developers, contractors, and financiers can see results within days or weeks rather than months or years and are able to compartmentalize risk, dramatically lowering adoption barriers and accelerating market penetration.

Construction robotics sits firmly on the high-value end of our curve for several reasons:

  • Deep domain expertise is essential. Construction sites represent "high situational diversity" environments with unstructured, dynamic conditions that resist generic solutions.

  • Workflow integration creates defensibility. Success requires seamless integration into complex construction workflows across multiple trades and stakeholders.

  • Specialized data builds moats. Data from construction sites creates proprietary insights / faster feedback loops that generalist platforms cannot easily replicate.

  • The cost of failure drives specialist adoption. Financial consequences of mistakes make construction firms more likely to trust specialists with demonstrated domain expertise who can help avoid cascading delays/overruns.

The economics now pencil out thanks to a continued rise in labor costs and rapid decreases in robotics hardware costs (along with increased modularity). Meanwhile, technical barriers around computer vision and edge computing are being solved and the growing adoption of digital design tooling provides the foundation robots need for reliable operation.

As a result, we believe the overall mindset of the industry has shifted. While the industry remains resistant to the full-stack / modular builder approaches that attempt to revolutionize the entire building process, construction robotics companies are finding the greatest success by addressing specific high-value tasks. We see promising applications across several categories: layout and site surveying (an extension of our thesis on data-driven infrastructure management), earthmoving and excavation (off-road autonomy), structural assembly, interior finishing, and demolition and hazardous work.

This task-specific approach, paired with business models that reduce risk for contractors/partners, aligns perfectly with our vertical robotics thesis—solving discrete, high-ROI problems through deep domain expertise and workflow integration rather than attempting full automation at once.

Our initial view here is that 2025 marks a critical inflection point where construction robotics solutions are beginning to show that substantial productivity improvements in the sector are, in fact, possible even in challenging labor and supply chain environments.


July

Thesis

Scaling the Skilled Physical Workforce

Breaking a critical reindustrialization bottleneck

March 2025

The skilled industrial workforce represents one of the most significant bottlenecks to economic growth and energy transition across Western economies. As reindustrialization accelerates, the gap between labor supply and demand is widening dramatically, threatening sovereign objectives and creating a risk of even greater economic instability.

Recent data highlights the severity of this challenge. According to a 2025 Capgemini report, 87% of organizations anticipate significant labor shortages as the workforce ages, potentially impeding reindustrialization efforts. The US Chamber of Commerce reports that 45% of manufacturing job openings went unfilled in 2023. Germany alone faces a projected shortage of 350,000+ skilled energy transition workers – a gap likely to expand as a result of the country’s massive fiscal stimulus package targeting infrastructure development.

Our conversations with industrial leaders consistently reveal that talent, not technology, is their primary constraint. While full automation promises to someday eliminate this bottleneck, the immediate path forward lies in solutions that transform how industrial skills are acquired, deployed, and scaled. 

We see this massive reshuffling of the global industrial economy as a rare wedge for companies to solve acute and immediate labor market pain points that will position them to expand in various directions to become fundamental infrastructure for critical industries. We see three distinct approaches to capturing value in this emerging category:

  1. Technology-Enabled Labor Pool Expansion – Companies in this category use automation, prefabrication, and digital workflows to reduce skill requirements, effectively expanding the addressable labor pool. These businesses typically operate as end-to-end service providers, controlling both the customer experience and the production process. Companies like Reframe Systems in construction exemplify this approach, with prefabrication and factory-based assembly methods allowing less-skilled workers to build high-quality structures under controlled conditions.

  2. Operational Standardization – This approach focuses less on technological transformation and more on establishing systematic operational playbooks for traditionally fragmented services. Companies standardize workflows, quality control, and customer interactions to create trusted brands in high-variation service businesses like machining (Isembard), renovation (Block), and specialty trades (Craftwork). Their competitive advantage stems from superior training systems, project management, and quality assurance rather than fundamental technology innovation.

  3. System Orchestration Platforms – This model involves building the connective tissue between labor supply and market demand. These platforms start by solving acute pain points in workforce access, management, or training for a specific sector, then expand to adjacent market categories. The growth of Workrise, which evolved from its start as an oil and gas labor marketplace (RigUp) to a comprehensive vendor and workforce management system for the energy industry, exemplifies this growth pathway.

Across all three models, the acquisition and development of talent remains the critical success factor. Companies are innovating in both initial training—creating accelerated pathways for career-changers to enter skilled trades—and continuous upskilling. The approaches above, and the best practices we expect industrial and field service customers to adopt, are consistent with our research on AI-enabled constellations of experts and multimodal tools for frontline workers: lower the barriers to effectiveness by removing friction around data capture, accelerating feedback loops, and making master-level insight more easily accessible. 

Companies that successfully scale the industrial workforce will become the operational backbone of the energy and manufacturing renaissance while positioning themselves to drive, rather than be disrupted by, the inevitable wave of industrial automation. By building the talent, data, and process knowledge layers of industrial operations today, these companies establish themselves as the essential connective tissue that future automation systems will augment (and rely on) rather than replace.


July

Thesis

Monte Carlos for Real Life

Thoughts on "Moneyball for the Mundane"

February 2025

Fundamental Consumer

Alternative titles for this might be “Moneyball for the Mundane” or “Tim Urban’s Big Green Tree as an (AI-enabled) service” – and we have touched on similar concepts in our constellations of experts memo.

With better data about an individual – choices, preferences, biases – leading up to today, shouldn’t systems be able to guide us toward a set of actions that will unlock what we perceive to be the ideal green path going forward? 

Daniel Ek has said in the past that one of the formative decisions Spotify made was capturing every possible data point about every user from day zero. What seemed costly and wasteful to many at the outset became the foundation for Discover, one of the most successful behavioral engines ever built. 

It is common for people to lament the scale of data exhaust created in our day-to-day activities: every financial transaction, location we visit, and conversation.

But what if we don’t have enough data about ourselves?

What if we should be capturing everything we possibly can – everything we see through Meta Ray-Ban lenses, every call and conversation, and every biometric indicator imaginable in the hope that someone builds Spotify Discover for the real world?

Now, this is dangerous territory, and the Black Mirror episodes write themselves – from humans losing agency, blindly following LLM guidance, to a further flattening of culture as models “Moneyball” our lives.

However, there's an important distinction here. Sports are finite games with clear rules and boundaries – they can be "solved" through optimization. This inevitably leads to convergence on ideal strategies. Life’s most important activities look a lot more like infinite games. Career paths, relationships, creative pursuits – these domains have no clear endpoint and near-limitless possible states. In these infinite spaces, better pattern recognition and decision support could actually unlock more unique paths, not fewer. 

As with GPS, we see a future where people, supported by AI, may rely less on their own sense of direction while becoming far more willing to be spontaneous, take risks, and drive to new places because they know the metaphorical GPS will always help them find their way. Moreover, these systems – millions of local optimization engines assisting individuals to find their most valuable paths and optimize their own unique game – might enhance rather than diminish human agency by illuminating options we'd never see on our own, creating more diversity of outcomes rather than conformity.
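To make the memo's title concrete, here is a toy Monte Carlo over personal decisions with uncertain payoffs. The point is not a single "optimal" answer but the distribution of outcomes behind each option; all numbers are made up for illustration:

  import random, statistics

  options = {  # hypothetical (mean, volatility) of some satisfaction score
      "stay in current job": (6.0, 1.0),
      "switch careers":      (6.5, 3.0),
      "start a company":     (5.5, 4.5),
  }

  def simulate(mean: float, vol: float, trials: int = 10_000) -> list[float]:
      return sorted(random.gauss(mean, vol) for _ in range(trials))

  for choice, (mean, vol) in options.items():
      o = simulate(mean, vol)
      print(f"{choice:22s} median={statistics.median(o):4.1f} "
            f"p10={o[1_000]:4.1f} p90={o[9_000]:4.1f}")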

Many great businesses have already been built around the notion of data-driven, adaptive personal improvement – Oura is a $5b+ business, Noom is estimated to have done $1b in ARR in 2023, and Duolingo recently touched a $20b+ market cap.

We believe there are more levels to be unlocked. 

Subscription models have thus far proven to be a great business model for finding the balance between data capture/use and user agency. With better upstream and downstream data, is an even better alignment of interests possible, e.g. via outcome-based models? While large platforms are the natural aggregators of the data that will power such solutions, is an ecosystem of hyper-personal services on top of this data possible thanks to advances in agent infrastructure and the rise of spontaneous software development? And what technical advances (e.g. in model memory) are required to make this a reality?

While we don't claim to know exactly what form these companies will take, we're convinced that better data capture and analysis will unlock new paths to human flourishing that are currently invisible to us. Just as Spotify's prescient bet on comprehensive data collection revealed music we'd love but never discover on our own, better systems for understanding ourselves might illuminate versions of our lives we'd never imagined possible. Perhaps the future isn't about optimizing toward a single ideal, but about expanding the landscape of possibilities for each individual. 

Our "Constellation of Experts" thesis points to one potential direction, but we suspect there are many more waiting to be discovered.


July

Thesis

Protein / Bio Dev Tools

Accelerating drug development

February 2025

Industry Transformation

Advancements in AI-driven and computational tools are unlocking protein development, making it more predictable and efficient. Much like software developers rely on specialized dev tools, modern protein engineering now benefits from:

  • Machine Learning (ML) (e.g., transformer models such as ESM and ProGen)

  • Generative AI (e.g., diffusion models and protein language models similar to GPT)

  • Structure Prediction Tools (e.g., DeepMind’s AlphaFold)

  • Automation (e.g., lab systems integrated into closed-loop workflows)

These innovations are shifting protein engineering from an experimental-first approach to a design-first discipline—accelerating the process, reducing costs, and increasing reliability. In effect, biology is gradually becoming a programmable engineering field.
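For a flavor of what the ML layer looks like in practice, here is a minimal sketch using the openly available ESM-2 protein language model via Hugging Face transformers; the pooled embedding can feed downstream property predictors. Model choice and usage are illustrative, not a recommendation:

  import torch
  from transformers import AutoTokenizer, EsmModel

  tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
  model = EsmModel.from_pretrained("facebook/esm2_t6_8M_UR50D")

  sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy sequence
  inputs = tokenizer(sequence, return_tensors="pt")
  with torch.no_grad():
      outputs = model(**inputs)

  # Mean-pool per-residue embeddings into a single vector for the protein.
  embedding = outputs.last_hidden_state.mean(dim=1)
  print(embedding.shape)  # torch.Size([1, 320]) for this small ESM-2 variant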

Two Strategic Approaches for Companies

Companies developing these models can generally choose between two business strategies:

  1. Developing IP In-House or via Partnerships
    In this model, companies integrate advanced models into a full-stack pipeline—from sequence design and structure prediction to experimental validation. Working independently or closely with biotech or pharmaceutical partners generates proprietary intellectual property (IP) and novel, high-value protein candidates (e.g., unique therapies or industrial enzymes). This approach emphasizes full control over the design-build-test cycle, directly driving competitive advantages through exclusive IP.

  2. Providing Accessible Tools (including OS models) as a Service
    Alternatively, companies can offer these advanced models as stand-alone tools. By developing user-friendly interfaces, APIs, and analytics, they allow customers to access cutting-edge protein development technology without building or managing a full pipeline and the associated risk. This model reduces friction and cost, enabling a broader customer base to benefit from AI-driven insights. Moreover, serving a diverse array of clients can generate a continuous stream of training data, which in turn improves the model’s overall performance over time.

While we like to back full-stack companies—most notably our investment in KoBold Metals—and we’re confident that substantial full-stack winners will emerge in the AI space, we recognize that AI in Bio differs structurally from AI in mining. Factors such as the larger scale of funding, a rapidly shifting technology landscape, and the necessity for deep domain expertise create unique challenges. These differences make it harder for us to confidently pick a single full-stack winner in the AI in Bio space. As technology and not bioscience investors, our investment style, expertise, and risk profile are better suited to a tools-as-a-service approach: modular, accessible AI solutions that can serve a wide range of customers without requiring the heavy capital and integration commitments typical of full-stack pipelines.

  • It should be relatively easier to win a customer with a protein development tool than by forming a full pipeline partnership. Full pipeline partnerships require significant commitment and capacity from both parties, and they tend to involve higher friction and cost. In contrast, a protein tool can be offered as a service, which is less resource-intensive for the customer. It is crucial that the tool/model is designed to be as low-friction as possible and fits in with existing workflows/tech stacks.

  • Additionally, by providing a tool rather than a full pipeline, a company can serve a broad range of customers with varied protein needs. This diversity allows the company to continuously collect and integrate training data from multiple sources, which can significantly enhance the overall model's performance over time. Serving many different customers not only spreads the risk but also offers early-mover advantages, leading to greater long-term success compared to focusing on individual, high-commitment pipeline partnerships.

  • However, it's critical that the data from these interactions is retained and used to continually improve the model. Without this feedback loop, the tool's advantage is lost. In summary, a tool-based approach reduces risk, costs, and friction while offering scalability and long-term benefits through improved model capabilities.

  • It is important to note that even if a company is pursuing a tool model initially, as long as it builds out wet lab capacity (which we deem crucial for training data generation, even for tool-as-a-service businesses), there is optionality to later develop its own IP.

  • Tool providers should be able to do value/outcome-based pricing, as discussed below.

A critical challenge with this model is how to capture value effectively. Outcome-based pricing—where fees align with measurable benefits—has proven successful in industries with quantifiable spending and benefits (which can be in the USD 100Ms for a single protein). Protein development, with its clear performance metrics, is well suited to this model. By tying pricing to outcomes, companies can align incentives and reduce risk for customers.

July

Thesis

Crypto Agent Payment Infrastructure

Enabling agents to transact

January 2025

Industry Transformation

As AI agents advance in autonomy and complexity, their ability to perform financial transactions becomes increasingly vital. Equipping these agents with digital wallets allows them to engage in economic activity: buying services, paying for APIs, managing cloud compute budgets, or even shopping online for real-world goods (e.g., purchasing data from other agents, restocking supplies for a small business, triggering payments for subscription-based services, or managing micro-payments for IoT tasks). Integrating decentralized payment systems offers a promising pathway to give AI agents the financial autonomy necessary to operate fluidly within these diverse contexts, rather than relying on legacy payment infrastructure, which is often gatekept, human-centric, and geographically fragmented. 

Decentralized payment systems provide several benefits for AI agents. They enable rapid transactions, often processed in real-time, which is crucial for agents requiring prompt payment capabilities. The reduction of intermediaries in these systems lowers transaction costs, making microtransactions economically viable. Furthermore, cryptocurrencies facilitate borderless transactions, allowing AI agents to operate internationally without the complications of currency exchange or cross-border fees. The use of blockchain-based smart contracts also allows for programmable, self-executing agreements, enabling AI agents to autonomously execute payments when predefined conditions are met, thereby enhancing automation and reducing the need for human intervention.
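A toy sketch of what such programmability can look like at the application level: an agent wallet whose every payment is checked against human-set policy (per-payment cap, daily budget, counterparty allowlist) before it executes. Chain interaction is stubbed out and all names are hypothetical:

  from dataclasses import dataclass

  @dataclass
  class AgentWallet:
      max_payment: float
      daily_budget: float
      allowlist: set[str]
      spent_today: float = 0.0

      def pay(self, recipient: str, amount: float) -> bool:
          if recipient not in self.allowlist:
              print(f"blocked: {recipient} not on allowlist")
              return False
          if amount > self.max_payment or self.spent_today + amount > self.daily_budget:
              print(f"blocked: {amount} exceeds policy limits")
              return False
          self.spent_today += amount
          print(f"paid {amount} to {recipient}")  # a real wallet would sign and submit a transaction
          return True

  wallet = AgentWallet(max_payment=5.0, daily_budget=20.0, allowlist={"api.dataseller"})
  wallet.pay("api.dataseller", 3.0)  # allowed
  wallet.pay("unknown.vendor", 1.0)  # blocked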

At the same time, we appreciate that implementing decentralized payment capabilities for AI agents introduces several challenges that necessitate thoughtful product design and strategic solutions. Regulatory compliance is paramount, as decentralized systems must navigate complex legal landscapes, including AML and KYC, which can vary significantly across jurisdictions. To address this, products should incorporate robust compliance frameworks that adapt to regional legal requirements. Security is another critical concern. Ensuring data privacy and implementing strong encryption and authentication protocols can protect against unauthorized access and data breaches. Finally, a user-friendly interface is vital for human operators to monitor and manage AI agents' financial activities effectively. 

By addressing these challenges through thoughtful design and strategic implementation, we believe that there is an opportunity for products to effectively harness the benefits of decentralized AI payments, paving the way for more autonomous and efficient AI-driven economic interactions.


July

Thesis

LLM Application Deployment: Resilience and Optionality

Today, the deployment of generative AI solutions into the enterprise, particularly large companies, has started and often exceeds expectations.

January 2025

Infrastructure Tech

The generative AI era presents an interesting paradox – strong confidence in the directional arrow of technological progress (the ever-expanding and evolving LLM blast radius) coupled with significant uncertainty around the economic implications, both macro and micro. Today, the deployment of generative AI solutions into the enterprise, particularly large companies, has started and often exceeds expectations.

At the same time, there is wide agreement that while early applications are driving positive ROI, most organizations face a significant change management problem in fully incorporating them into existing operational frameworks - “there are no AI-shaped holes lying around.” For many enterprises and their executives, this has led to a “flight to trust,” with large consulting firms benefitting from enterprise appetite to utilize generative AI. This uncertainty around future enterprise workflows is further reflected in the observation that most AI startups that have found traction have done so with an anthropomorphized approach, “selling the work” in areas like legal support and accounting – essentially building an end-to-end AI replica of what customers have come to expect from human + software.

While we think great businesses can be built there, this can’t be all. We believe that as organizations and society develop a better understanding of AI, we will build core workflows around this new paradigm, constantly adapting existing organizational structures and coming up with entirely new ones. We broadly differentiate between resilience and optionality and believe that both areas provide opportunities for interesting models and platforms to emerge.

Resilience focuses on enabling existing companies, which are forced to adopt AI to stay competitive, to do so in secure and effective ways. As described above, these companies already have processes and employees. Both might have a hard time adapting.

As with any complex system, we believe there is a unique opportunity by looking at the smallest unit in organizations - employees. While executives and consultants try to implement AI policies top-down, high-agency individuals (armed with ChatGPT, Claude, Cursor, and the latest tools) are constantly discovering productivity enhancements built around their idiosyncratic workflows, often utilizing these tools without explicit permission. We see an opportunity to push much of the work of identifying enterprise-specific best practices to these forward-thinking individuals and for a novel platform focused on this bottom-up approach to AI resilience to emerge.

In the process, such a platform could kill two birds with one stone. It provides a starting point for better data controls and security processes to manage risk while helping companies understand the financial implications (productivity improvements, cost structure changes, etc.) of continued AI deployment. 

Furthermore, monitoring and visibility into AI use by employees help enterprises gain insight into best practices (making AI fit into existing holes) that can be rolled out across the organization. The big opportunity that emerges from this wedge and model for enterprise trust building is that such a platform positions itself well as we move toward a world of “spontaneous software” and, possibly, “AI as the last employee” – similar to how Workday came to define ERP for the “digital transformation” era.

Optionality focuses on building companies around novel organizational structures with a view to the upside, native to AI workflows and not possible before. 

This is an extension of what we previously wrote on “spontaneous software and SaaS on-demand”. In line with a recent post from Nustom that draws parallels from the autonomous vehicle market to propose the idea of a self-driving startup, we believe that there is a massive opportunity here for companies that operate like mobile game studios, making use of the reality that software is increasingly cheaper to write and startups cheaper to run as AI gets better and more capable at both. We expect these companies will excel at rapid experimentation and iteration, consistently positioning themselves ahead of the capability curve to try to catch lightning in a bottle (hits-driven), possibly in combination with being long-tail driven, running a large number of small cashflow-generating businesses under one roof.

July

Thesis

Digital Olfaction

Some of AI's greatest opportunities lie in its application to understanding and transforming the physical world.

January 2025

Infrastructure Tech

Some of AI's greatest opportunities lie in its application to understanding and transforming the physical world. We believe in the potential of convolutional neural networks, GNNs, and transformers to help us deal with this complexity and make sense of the world in ways that we have not been able to (we internally call these "expansion" use cases). This theme runs through several of our investments, most notably KoBold Metals. 

We believe that digital olfaction and a better understanding of how molecules make us smell are among those areas. Scent, due to its complexity, is our least understood sense. Novel approaches to machine learning, such as GNNs, have proven able to cut through this complexity and beat the human nose at scent-profile classification based on molecule structures. Osmo, the company at the forefront of this research, has proven that it can utilize this understanding to develop novel scents. It is reasonable to assume that this technology will enable the faster development of novel molecules at lower cost and at scale.
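For intuition, a minimal sketch of the GNN approach: a molecule is treated as a graph (atoms as nodes, bonds as edges) and a small graph network predicts scent labels from structure. Architecture and dimensions are illustrative, using the open PyTorch Geometric library:

  import torch
  from torch_geometric.nn import GCNConv, global_mean_pool

  class ScentGNN(torch.nn.Module):
      def __init__(self, atom_features: int = 16, num_scent_labels: int = 10):
          super().__init__()
          self.conv1 = GCNConv(atom_features, 64)
          self.conv2 = GCNConv(64, 64)
          self.head = torch.nn.Linear(64, num_scent_labels)

      def forward(self, x, edge_index, batch):
          x = self.conv1(x, edge_index).relu()
          x = self.conv2(x, edge_index).relu()
          x = global_mean_pool(x, batch)  # one vector per molecule
          return self.head(x)             # multi-label scent logits

  # Toy molecule: 4 atoms, 3 bonds (each edge listed in both directions).
  x = torch.randn(4, 16)
  edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]])
  batch = torch.zeros(4, dtype=torch.long)
  print(ScentGNN()(x, edge_index, batch).shape)  # torch.Size([1, 10])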

In 2023, the global aroma chemicals market was valued at approximately USD 5.53 billion (unclear if this also includes produced chemicals vs. only IP). This market is essentially dominated by a few large players: Givaudan, Firmenich, IFF, and Symrise. All these players are fully integrated, meaning they both develop scent molecules (IP) and manufacture them. It is unclear how much value is in the pure IP, but some tailwinds could favor the emergence of a novel AI-enabled player focused on novel IP. In 2023, a class action lawsuit was filed against major fragrance companies Givaudan, Firmenich, IFF, and Symrise. This followed multinational (Switzerland, Europe) antitrust investigations into suspected price-fixing by these companies initiated earlier the same year. Moreover, there is a marked shift in the industry toward sustainable molecules that don’t require scarce resources and have no negative health effects.

The ability of AI to generate molecules that either have novel scent profiles or resemble existing ones without negative externalities (e.g., production-associated health effects) is likely a unique fit for these models. We expect that to create maximum value, such a model (or suite of models) would likely need to be capable of 1) modeling molecule interactions to create a whole scent, 2) understanding constraints (e.g., toxicity, costs), and 3) assessing the producibility of these molecule sets at scale.

Moreover, we see substantial potential for market expansion. Suppose these AI systems become capable of identifying, mapping, and predicting the behavior of scent molecules, and certain hardware advancements (essentially a chip capable of detecting, analyzing, and recreating scent) are made. In that case, several new application areas emerge, spanning from environmental monitoring to medical diagnostics, where AI can detect disease biomarkers through molecular analysis, to consumer applications such as capturing, reproducing, and sharing scent online. While this is hard to quantify, it is reasonable to assume that there is substantial option value.

July

Thesis

Space Debris

The number of objects we are sending to space is growing exponentially.

January 2025

Infrastructure Tech

The number of objects we are sending to space is growing exponentially. Thanks to SpaceX, launch costs have fallen 80-90%. While it took nearly 60 years to put 2,000 satellites in orbit, we launched 3,000 in 2023 alone. 100k satellites are expected to launch by 2030, marking a further increase in the complexity of space operations. 

As old space assets deteriorate and more are launched, collisions are inevitable, particularly as a result of space debris. Satellites already make 100,000+ maneuvers per year for collision avoidance. Losses due to collisions in low Earth orbit were estimated at ~$100m in 2020. Since then, we have at least tripled the satellite population.

While responsiveness is improving (e.g. edge GPUs to enable on-board autonomy), hundreds of billions of dollars in assets are and will be exposed without i) precise monitoring, ii) proactive defense systems (beyond trying to move out of the way), and iii) adequate financial risk management (i.e. insurance models). 

While it is easy to forget amid the current space market momentum, the industry is still walking a fine line – something that seems to have motivated Elon’s all-in effort in the direction of Donald Trump’s election. As the nuclear industry has demonstrated, the public perception pendulum in highly sensitive industries can swing toward the negative (for decades) with only a small number of high-profile failures. Space debris is the type of problem that, left unchecked, poses a Three Mile Island-style risk for the industry. And like most “waste”-related problems, there is often not a strong enough incentive for any single actor to solve it until it is too late.

The fragmented incentives and control mechanisms in addressing space debris are evident in the current regulatory frameworks. 

The United States is a patchwork of policies, with agencies like the FAA, FCC, and NASA each taking different approaches to limiting or removing space waste. Europe’s approach has been more comprehensive with the European Space Agency (ESA) developing the "Zero Debris Charter," aiming to prevent the generation of new orbital debris by 2030. As of October 2024, 110 countries or entities have joined the charter, and discussions are ongoing with major satellite operators for participation. 

Despite these initiatives, the absence of a binding international legal framework leads to a "tragedy of the commons" scenario, where individual actors may lack sufficient incentives to invest in debris mitigation (opting instead to accelerate commercial advances amid increasing competition), resulting in increased collective risk.

International cooperation around debris is also threatened by geopolitical posturing. Without better visibility and defense mechanisms, nation-states will always have plausible deniability around the destruction of important satellites and space infrastructure (“it wasn’t us, it was debris”). Since even 1mm fragments of debris can disable a satellite entirely, this is not too much of a logical leap.

We believe that solving the problem of space debris creates an interesting wedge for companies to eventually become critical infrastructure for space security and risk management.

July

Thesis

AI-enabled Business Planning

Giving organizations, on steroids, what Cisco called its ‘virtual close’ advantage more than 20 years ago.

January 2025

Industry Transformation

The generative AI era presents an interesting paradox: strong confidence in the directional arrow of technological progress (the ever-expanding and evolving LLM blast radius) coupled with significant uncertainty around the macro- and microeconomic implications. Acknowledging this uncertainty, we expect three things to happen as we move toward a world of increased AI and agent usage by organizations and, possibly, a trend towards “AI as the last employee.”

  1. Data and information will be processed much faster, leading to real-time insights and decision support. 

  2. The metabolic rate of organizations is set to go up as feedback loops from planning to action become faster.

  3. Organizations will face substantially different resource and capital allocation decisions.

All of the above requires an orchestration, planning, and decision layer purpose-built for and enabling these changing dynamics. As a result, we see an opportunity to build an AI-enabled business planning platform with substantial optionality to become an integral part of the roll-out and management of increasingly powerful AI systems – giving organizations, on steroids, what Cisco called its ‘virtual close’ advantage more than 20 years ago.


July

Thesis

Data-Driven Infrastructure Management

There is an opportunity for new players to emerge at the intersection of two historically distinct types of businesses: infrastructure inspection and architecture, engineering, and construction (AEC).

January 2025

Industry Transformation

One of our core theses around AI in the physical world is that novel data generation can drive substantial value creation. Robotics, drones, and sensors used for inspection fit right in: providing customers with high-value (and revenue-generating) inspection services enables unique data collection at scale. As a result, we believe there is an opportunity for new players to emerge at the intersection of two historically distinct types of businesses: infrastructure inspection and architecture, engineering, and construction (AEC). The inspection business generates the data that enables high-value AI-enabled services in the design, construction, and maintenance phases of a project.

We are interested in investing in companies that have found a unique wedge into the market to build large sets of novel and proprietary data that enable a flywheel of higher-quality services. We believe that the category leader in this space can create an agnostic platform compatible with different robot types from various manufacturers to deliver an increasing range of such services without needing hardware development. 

More effectively managing critical infrastructure assets through technology-enabled inspection, dynamic monitoring, and proactive intervention represents a crucial lever in stabilizing risks presented by emerging security, climate, and energy challenges, promoting public health and safety, and driving more effective capital allocation across the public and private sectors. 

Every four years, the American Society of Civil Engineers (ASCE) releases the Report Card for America’s Infrastructure. It details the condition and performance of the nation’s infrastructure. Its most recent report, released in 2021, gave the United States a C- grade, highlighting a widening investment gap that the ASCE estimates will cost each American $3,300 per year by 2039 (USD 90B+ total). In the years since the report, pressure has increased thanks to challenges imposed by extreme weather events, substantial changes in the global energy mix, and an increasingly tenuous national security situation.

Private infrastructure, from energy plants to commercial shipping, is fighting against the challenges and economic losses associated with system outages. For example, a study by the Lawrence Berkeley National Laboratory and the U.S. Department of Energy estimated that power interruptions cost the U.S. economy about $44 billion annually.

Solving these problems at scale requires moving away from manual inspection and towards more scalable technology-enabled approaches. These are substantially safer and generate dramatically more data that can serve as the foundation for appreciably higher-quality decisions.

At the same time, public and private asset owners are starting to realize that inspection and data collection ideally begin at the outset of large projects and during construction. That way, decisions can be optimized, mistakes can be identified, and one has a digital foundation for future inspections.


July

Thesis

Cross-Cloud Cost Optimization

Moving beyond cost tracking

December 2024

Infrastructure Tech

The FinOps and cloud cost optimization space is rapidly evolving beyond simple cost tracking. We observe a few trends: 

  • FinOps now encompasses a broader scope, including SaaS management, AI workload optimization, hybrid cloud governance, and even on-prem IT cost control. Meanwhile, AI workloads have emerged as a significant cost driver, pushing FinOps to expand its focus. As enterprises invest in machine learning and generative AI, managing the cost of GPU-heavy training and inference processes is becoming a priority.

  • As cloud spending continues to rise, organizations are shifting from merely monitoring expenses to implementing proactive cost governance and automation strategies. A key driver of this transformation is AI-powered automation. Companies are no longer satisfied with dashboards that only report costs—they need real-time anomaly detection and hands-free cost optimization (a minimal sketch of such detection follows this list).

  • Multi-cloud and hybrid cloud optimization are also gaining traction, as companies increasingly operate across AWS, GCP, Azure, and on-prem environments. Traditional cost tools built for a single cloud provider are no longer sufficient. This can go as far as intelligently placing workloads across clouds for optimal performance and cost efficiency. This shift highlights the growing demand for solutions that seamlessly manage costs in complex, multi-cloud environments.
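As promised above, here is a minimal sketch of what such anomaly detection can look like: flag a day's cloud spend when it deviates too far from a trailing window. Thresholds and data are illustrative:

  import statistics

  def spend_anomalies(daily_spend: list[float], window: int = 7, z_thresh: float = 3.0):
      """Yield (day, spend, z-score) for days that deviate from the trailing window."""
      for day in range(window, len(daily_spend)):
          history = daily_spend[day - window:day]
          mean, stdev = statistics.mean(history), statistics.stdev(history)
          z = (daily_spend[day] - mean) / stdev if stdev else 0.0
          if abs(z) > z_thresh:
              yield day, daily_spend[day], round(z, 1)

  spend = [100, 102, 98, 101, 99, 103, 100, 97, 250, 101]  # day 8: runaway GPU job?
  print(list(spend_anomalies(spend)))  # flags day 8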

In our view, all of these trends make it necessary for independent third-party tools to emerge, rather than simply relying on the tools provided by the cloud providers (although they have been pushing hard to develop these). As of now, we are uncertain what the FinOps stack might look like, but essentially, we see two scenarios:

  1. Full-stack providers that can cover all the bases outlined above. This could either be one of the emergent players or one of the existing legacy players.

  2. Different layers in the stack: primarily, one layer for observation/aggregation with a lot of breadth, and one for execution automation at the technical level.

July

Thesis

Precision Wellness

Better health outcomes—delivered at lower costs and with greater accessibility—are fundamental to economic growth and human flourishing.

December 2024

Fundamental Consumer

Better health outcomes—delivered at lower costs and with greater accessibility—are fundamental to economic growth and human flourishing. Preventative healthcare represents our largest lever to unlock better outcomes at scale. However, the centralized control, opaque incentives, and high friction that characterize today’s healthcare system hold back progress. It is not built with technological advancement in mind and fails to meet the standard of experiences consumers have elsewhere. 

As the prevailing model fails to evolve, a new paradigm—precision wellness—is emerging. This transformation mirrors the forces that transformed media, finance, and commerce by redistributing power over the experience to individuals. From top-down institutional mandate to bottom-up iteration, from one-size-fits-all solutions to hyper-personalization, from controlled to in control.

The wellness-driven consumer is at the center of this shift. Motivated by the same “divine discontent” that has continuously sparked consumer innovation across the economy, their demands for scientific rigor and an elevated user experience are accelerating the growth of the precision wellness opportunity. 

  • The next phase of GLP-1 adoption, perhaps the most important catalyst of this overall opportunity, appears increasingly driven by consumer-centric companies; 

  • The vast array of cheap, passive sensors integrated into phones, watches, and headphones creates longitudinal data that was previously unavailable, while clinical-grade modalities on consumer devices build trust in health-focused technology and reorient expectations toward continuous, rather than episodic, monitoring and intervention; 

  • The "mainstreaming of biohacking" is evident in the adoption of CGM among non-diabetics, the growth in advanced biomarker testing, whole genome testing, full-body MRIs, and the increasing demand for personalized, science-driven health optimization protocols.

As more people experience the feedback loops of better health and deeper health understanding – for themselves and those around them – their engagement compounds. This flywheel effect, combined with traditional healthcare's eroding monopoly on trust and access, creates a strong why now for emerging companies capable of integrating science, technology, and brand building. 

We also recognize that precision wellness has a significant blast radius effect, with aggregators, especially Apple, at the center. Data gravity, vast resources, and an incentive to commoditize complementary solutions make it unwise to compete directly. Thus, we are most interested in companies building non-device-centric models for distributed discovery, diagnostics, and delivery. This includes:

  • Next-gen healthcare providers integrating novel diagnostics and data collection into full-service care delivery (going beyond simply digitizing traditional models).

  • Knowledge networks (content + community + coaching) that use personalized insights to help guide users through specific niches of their precision wellness journey, creating a layer of trust in a consumer area that can be overwhelming due to a low signal-to-noise ratio.  

  • Companies using biological insights, often via at-home testing modalities, as a wedge to build up proprietary data sources, trusted brands, and communities.

July

Thesis

“Scale as a Service” for the Bio-Industrial Economy

Over the past 25 years, the emergence of "scale-as-a-service" has powered multiple "invisible innovation" waves in the software world.

November 2024

Infrastructure Tech

Over the past 25 years, the emergence of "scale-as-a-service" has powered multiple "invisible innovation" waves in the software world. Infrastructure begets applications begets the need for more infrastructure, and so on. Platforms like AWS, Shopify, Stripe, and Twilio build scale on behalf of their customers in important but non-core functions and enable access via API. Over time, emerging companies bundle infrastructure from various scale-as-a-service providers, making it possible to go bigger faster or target previously unaddressable niches. Thanks to the programmatic nature of interactions, scale-as-a-service solutions minimize coordination costs and maximize control, enabling precision execution that aligns with a company’s desired speed, scale, and budget.

As scientific breakthroughs make biology more programmable, the why now for Scale-as-a-Service models is approaching a tipping point – but with important nuance. While AI represents a powerful enabler of new product and process design, the reality of biological complexity means we first need better tools and data to model and manipulate processes. As Niko McCarty notes, even the most significant AI breakthrough, AlphaFold, reveals the gap between static prediction and biological reality. Scale-as-a-Service providers can help bridge this gap by industrializing high-quality, standardized data collection across the design and production process. A 2023 study of biomanufacturing bottlenecks found that companies consistently struggle with purification, continuous processing, analytics, and expertise gaps - all areas where specialized infrastructure providers can play a Shopify-like role.

Meanwhile, dominant macro trends like the energy transition and US-China competition are pushing companies and countries towards more sustainable and domestic production models. Half of the world’s largest companies are committed to net zero, “reshoring” continues to grow in popularity on earnings calls, and the Biden Administration has set targets like producing 30%+ of the US chemical demand via biomanufacturing pathways within 20 years.

While first-generation companies like Ginkgo and Zymergen have struggled massively, select second-generation companies like Solugen show signs of staying power. If (still a big if) these next-gen companies prove the economic viability of bioproduction, we expect to see several successful scale-as-a-service providers emerge. These companies will become foundational platforms for trillion-dollar industries like chemicals, materials, energy, CPG, and food & agriculture where bioproduction remains pre-commercial scale. Like the internet, the invisible innovation waves created by this infrastructure application cycle may show that the market for bio-enabled solutions is larger and more diverse than we could have imagined a priori.

We expect most successful scale-as-a-service providers to start with asset-lite approaches. Expanding upon Chris Dixon's "come for the tool, stay for the network" insight, these companies will initially aggregate supply, demand, and attention through useful data and coordination tools. From there, they will evolve into market orchestrators, connecting buyers with sellers and unlocking new capital flows. Eventually, many will build out physical infrastructure at scale, becoming the operating systems of the bio-industrial economy.

July

Thesis

Prediction Markets

Prediction markets represent a fundamental transformation in how society aggregates and values information.

November 2024

Infrastructure Tech

Prediction markets represent a fundamental transformation in how society aggregates and values information. As trust in traditional institutions continues to erode, prediction markets will emerge as an efficient mechanism for pricing risk, surfacing truth, and reshaping decision-making across the economy.

Throughout the history of technology – particularly the internet – important platforms often begin in legal grey areas, where user demand outpaces regulatory frameworks. From eBay to Uber, Airbnb, and Spotify, the most impactful companies solve problems that existing systems cannot address – or, more precisely, where prevailing incentive structures baked into law by incumbents actively resist progress. 

While incumbent resistance will be significant, we believe there is an opening for new mechanisms of collective intelligence that align incentives toward accuracy and accountability.

This transformation aligns with our broader theses around the role of better data (often physical data) driving a shift toward more dynamic and precise information-centric business models. In the same way that pricing for digital tools is evolving from static per-seat licenses to value-based outcomes, prediction markets represent a step-function improvement in how we price and distribute information. Once people experience the power of real-time, market-driven signals – whether in election forecasting or project management – we see no going back to traditional polling or planning systems. Thus, we believe the long-term opportunity extends far beyond gambling or speculation – it's about fundamentally improving how societies and organizations make decisions and allocate resources. 

Amidst the “moment” prediction markets are having in the wake of the US presidential election, critics rightly point to fundamental challenges: the subsidies required to bootstrap liquidity in niche markets may prove prohibitively expensive, and many use cases beyond elections and sports could struggle to attract meaningful participation. While these are serious concerns, we believe they echo historical skepticism of other transformative markets. For example, at the outset of equity markets, stock trading was seen as gambling and was dominated by "bucket shops" where people placed bets on price movements without owning shares. Such activity was seen as zero-sum, manipulated, and socially destructive. Yet over time, infrastructure emerged to make securities trading safer and more accessible: mutual funds, for example, transformed speculation into investment, regulations built trust, and exchanges standardized trading.

A similar story played out in e-commerce. In the mid-1990s, conventional wisdom held that consumers would never trust online platforms with their credit card information. Amazon launched in 1995 to widespread skepticism, yet by creating infrastructure that built trust and reduced friction, e-commerce became not just accepted but essential. 

Our hypothesis is that we are in the 1995 - 2000 period for prediction markets – mass-market cultural awareness is growing and momentum is clear – but market penetration is little more than a blip in the overall picture. In the same way that mobile devices and social networks (among other things) provided the technological catalyst for deeper e-commerce penetration, we see AI (and AI agents) as a critical enabler of the next wave of prediction market growth. For example, by creating liquidity in thousands of micro-markets, AI has the potential to help users take more sophisticated portfolio approaches and contribute to a “utilization cascade” that shifts prediction markets from perceived gambling into new “standard” tooling for information discovery.
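The liquidity-subsidy concern raised above has a precise form in the standard automated market maker for prediction markets, Hanson's logarithmic market scoring rule (LMSR): the operator's worst-case loss is b times the log of the number of outcomes, so seeding thousands of AI-managed micro-markets has a quantifiable cost. A sketch for intuition, with illustrative parameters:

  import math

  def lmsr_cost(q: list[float], b: float) -> float:
      """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
      return b * math.log(sum(math.exp(qi / b) for qi in q))

  def lmsr_prices(q: list[float], b: float) -> list[float]:
      weights = [math.exp(qi / b) for qi in q]
      total = sum(weights)
      return [w / total for w in weights]

  b = 100.0            # liquidity parameter; max subsidy for 2 outcomes = b * ln(2)
  q = [0.0, 0.0]       # shares outstanding for YES / NO
  print(lmsr_prices(q, b))                                # fresh market: [0.5, 0.5]

  q_new = [50.0, 0.0]  # a trader buys 50 YES shares
  print(round(lmsr_cost(q_new, b) - lmsr_cost(q, b), 2))  # trade cost (~28.09)
  print([round(p, 3) for p in lmsr_prices(q_new, b)])     # YES price rises to ~0.622
  print(round(b * math.log(2), 1))                        # worst-case subsidy ~69.3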

Success in this category will likely follow e-commerce's growth trajectory. While early adopters drove initial growth, widespread adoption required infrastructure that created trust and reduced friction. Today's prediction market leaders will similarly need to build both consumer-facing brands and backend capabilities. We expect to see an "Amazon of prediction markets" emerge – potentially Kalshi – that combines direct consumer reach with infrastructure that powers other platforms. This will enable an ecosystem of niche players targeting specific verticals or user segments.

A key question remains around where value ultimately gets captured. Just as e-commerce value accrued to new platforms, internet-native brands, and incumbents who successfully adapted (e.g. Walmart), prediction market infrastructure will create several winning archetypes. Beyond pure-play platforms like Polymarket, existing media and financial platforms that already own distribution – from ESPN to Bloomberg – could emerge as powerful players.

The opportunity extends far beyond any single vertical. By expanding the surface area of possible transactions, prediction markets could enable new types of information exchange that are hard to imagine a priori. Winners will start by dominating specific verticals where information asymmetries create clear value propositions (Books:Amazon::Elections:Kalshi), then expand into adjacent use cases as users become comfortable with the model. Those who can navigate the regulatory environment while building trusted brands will become essential infrastructure for the information economy.

July

Thesis

AI-driven SaaS Replacement

LLMs have started and will continue to bring down the costs of writing software.

November 2024

Infrastructure Tech

As we discussed in many other category theses, we believe that in the AI era, many of the laws of physics that governed technology and business models are changing. Much has been written about the proclaimed ‘End of Software.’ The argument goes something like this.

LLMs have started and will continue to bring down the costs of writing software. This leads to a world where software is increasingly created for N-of-1 customers and will be easily amendable over time. Ideating and building (or prompting) this software will increasingly shift from developers to non-technical users.

As software creation becomes cheap, this poses a challenge to traditional software companies, whose core value proposition (development, maintenance, and hosting of software, customization, and customer support), business model, and moats are rooted in leveraging initial investments into brands, standards, and free cash flow, which in turn funded features and app ecosystems catering to a heterogeneous customer base with little incentive to go elsewhere due to switching costs. Switching becomes substantially more attractive if the ‘perfect,’ highly personalized software is the alternative. This fundamentally challenges the business model of these companies.

With that established, the key question is what the new paradigm might look like. 

There is a vision that, if LLMs and agents have access to all our data, software and interfaces will be generated in real time, on demand, and only emerge when they are needed. Fully in the control of users (or their LLMs/agents), this software costs only as much as the compute required to build it. While this vision is undoubtedly appealing, there are a few missing pieces:

For one, we assume that it will take some time for models to generate end-to-end software applications. Until this is possible, someone needs to be responsible for ensuring the software works – not only from a technical perspective but also from a usability perspective. Just because a feature can be easily built doesn’t mean it should be built. Until models can fully understand context (at which point it is questionable why there would even be a need for human-readable software), domain-specific product expertise will be required to build useful products for specific use cases. Moreover, customer support will likely remain necessary, as will enterprise customers’ desire for somebody to lean on when things go wrong.

As a result, we believe there is an opportunity to build a company here. This company will have certain features: 

  • Provide a platform that offers guidelines to non-technical users to create applications for their specific needs 

  • Have an in-house team of developers to guarantee that software is functional when LLMs fail 

  • Create a platform / App Store that enables developers to publish their applications and others to use them

  • Provide a data platform and SDKs that match needs to suitable features – whether already developed or novel – and make integrating those features easy

  • Business Model: 

    • Initial Development - one-off 

    • (Data) Platform - ongoing 

    • Developer Platform / App Store - marketplace take rate

July

Thesis

AI-Driven CAE

Modern CAE's transformation combines AI and deep learning, drastically improving physical design efficiency, creating opportunities for new hardware-makers, and challenging established OEMs.

October 2024

Industry Transformation

Computer-aided engineering (CAE) tools are the backbone of modern product development. As a result, they underpin much of our modern physical environment. Today, several key catalysts are shifting the playing field and creating long-awaited opportunities for new physical design companies to emerge and scale. 

  1. There is immense potential to commercialize the growing body of relevant research. CAE traditionally utilizes numerical solvers to simulate and analyze complex engineering problems (e.g., Finite Element Analysis (FEA), Computational Fluid Dynamics (CFD), and Multibody Dynamics (MBD)). Depending on the complexity of the problem and the computational resources available, CAE simulations can take anywhere from a few minutes to several days or weeks.

    In recent years, a growing body of research has trained deep-learning models on simulation data to create so-called deep-learning surrogates. Deep learning-based surrogate models leverage deep neural networks to approximate complex physical systems or simulations, providing fast and efficient predictions while maintaining reasonable accuracy (i.e., running complex simulations in seconds); a minimal sketch of the surrogate idea appears at the end of this section. Methods include data-driven (e.g., GNNs, NOs, RNNs) and physics-driven (e.g., PINNs) deep learning and generative models. Technology maturation makes the opportunity ripe for the right team – one with access to data and the ability to learn from a constant feedback loop of testing these models against a verifiable physical reality – to push the boundaries of these methods. Combining these into easy-to-use workflows can fundamentally change how engineering (simulation, in particular) is done at scale.

    An example from research by McKinsey on this: "One company in the power-generation sector, for example, used the approach to optimize the design of large turbines for hydroelectric plants [...] the company reduced the engineering hours required to create a new turbine design by 50 percent and cut the end-to-end design process by a quarter. Better still, the approach generated turbines that were up to 0.4 percentage points more efficient than conventional designs". This is the type of upside that is necessary to get the attention of potential customers in the space. 

  2. We are in the early days of a hardware supercycle. The top-down push by Western economies to reindustrialize and redevelop production capacity in the name of national security, climate change mitigation, and economic growth has driven capital and talent toward physical world innovation. With role models like Tesla, SpaceX, and Anduril leading the charge, hundreds of well-funded aerospace, energy, manufacturing, robotics, and transportation companies are emerging with demands for a modern physical design and engineering stack. This increased competition is pushing incumbent OEMs to experiment with new solutions for the first time.  

  3. AI-native business models create a wedge for new players. A shift in business models has thus far marked the deployment of advanced AI into foundational industries – KoBold owns its own exploration assets while Isomorphic Labs shares in the economic upside of its IP. Similar value-based, consultative, and/or “forward deployed” approaches – a partner rather than software vendor relationship – could create the opening for new players to gain footing with large customers and expand over time, avoiding the all-or-nothing sales cycles that have long protected existing leaders.

The combination of evolving customer demands, novel research, and new business models has formed the basis for an entirely new paradigm of computer-aided, AI-driven design and engineering tools. These tools unlock faster, cheaper feedback loops, shift workflows from linear to parallel, and enable emergent use cases. This increases both speed and quality in a way incumbents struggle to defend against.
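To make the surrogate concept referenced above concrete, here is a minimal, illustrative PyTorch sketch – synthetic data and a toy architecture of our own choosing, not any specific research method – of replacing an expensive solver call with a learned approximation:

  import torch
  import torch.nn as nn

  # Stand-ins for a dataset exported from a conventional FEA/CFD solver:
  # X holds design parameters, y the solver's output (e.g., peak stress).
  X = torch.rand(10_000, 8)
  y = torch.rand(10_000, 1)

  surrogate = nn.Sequential(
      nn.Linear(8, 128), nn.ReLU(),
      nn.Linear(128, 128), nn.ReLU(),
      nn.Linear(128, 1),
  )
  optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
  loss_fn = nn.MSELoss()

  for _ in range(200):                # fit the surrogate to the solver's outputs
      optimizer.zero_grad()
      loss = loss_fn(surrogate(X), y)
      loss.backward()
      optimizer.step()

  # Once trained, scoring thousands of candidate designs takes milliseconds,
  # versus minutes-to-days per design for the underlying numerical simulation.
  with torch.no_grad():
      predictions = surrogate(torch.rand(100_000, 8))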

July

Thesis

The Manufacturing Action Layer

As the cost of adding sensors to anything and everything in the manufacturing process has decreased significantly, the amount of data produced in the factory environment has exploded.

October 2024

Industry Transformation

The anticipated (and much-needed) manufacturing renaissance in the US and Europe – sparked by rising competition with China and a movement to invest in expanding domestic production capacity in the wake of pandemic-induced supply chain breakdowns – is hampered by several deficiencies. Among these limiting factors is the challenge of turning vast amounts of disparate industrial data into actionable insights and execution that drive true productivity gains.

As the cost of adding sensors to anything and everything in the manufacturing process has decreased significantly, the amount of data produced in the factory environment has exploded. However, conversations with people across the manufacturing landscape make it clear that the impact of production digitization continues to underperform expectations. 

More than a decade into the Industrial IoT wave, most data from the manufacturing process ends up – at best – brought together into static Excel files. And while platforms like Palantir’s AIP promise rapid transformation, the ground reality is that data from different systems continues to live only in the heads of individual operators – a critical risk in an industry with massive turnover and an aging workforce. The VP of Industry 4.0 at a ~ $5b market cap automotive supplier recently remarked that they still lack the visibility to know whether a machine is even running in a factory without calling someone on the ground.

Incumbent software offerings in manufacturing are often stitched together over years (even decades) of acquisitions and integrations, resulting in a mess of fragmentation, technical debt, information silos, and process bottlenecks.

Given this backdrop – and the macro tailwinds that will continue to drive demand for next-gen manufacturing solutions – our initial hypothesis is that there are two interesting angles of attack for new companies: 

  1. Modern industrial control and execution systems capable of aggregating data across modalities and business systems, automating mission-critical operation and production activities, and assuming responsibility (via outcome-based pricing models) for driving efficiencies.

  2. Software-defined manufacturers aiming to displace incumbent manufacturers entirely through more efficient end-to-end approaches in specific verticals/use cases. 

Both models face challenges. The “base rates” for selling impactful digital solutions to manufacturers are mediocre at best, and the companies that have reached scale – platforms like Cognite, Palantir, and Samsara – have significant distribution advantages that more narrowly focused emerging entrants must overcome. For the “full stack” players, the scale potential is clear, but it remains to be seen whether venture capital is the right financing tool (“CNC machines with startup branding” is how one person described one of the companies to us).

July

Thesis

Nuclear Supply Chain

Technology deployment cycles – from the railroads to the internet – have long been characterized by dramatic boom and bust cycles. The nuclear cycle is picking up pace.

September 2024

Industry Transformation

Working Thesis

Technology deployment cycles – from the railroads to the internet – have long been characterized by dramatic boom and bust cycles. And while overreliance on analogies is dangerous, these historical precedents provide one lens through which we can contextualize the current moment. 

Today, the AI boom is drawing parallels to the 1990s dot-com and telecom bubbles, with massive capital expenditures, promises of exponential growth, and early productivity gains. On the horizon, the potential of general-purpose robotics even resembles the iPhone-driven mobile revolution that followed almost a decade after the dot-com bust. 

But the differences between the two eras are equally striking. Today, incumbent technology companies possess more structural power over the ecosystem than 30 years ago, suggesting perhaps less overall volatility at the expense of dynamism – i.e., the Magnificent Seven serve as “hysteria dampeners” thanks to massive balance sheets and distribution networks. And while opportunism in AI is certainly present, the technical barriers to entry for building a competitive foundation model (and the pace of customer feedback) are substantially higher than those for building an ISP during the dot-com frenzy.

However, the most consequential difference between the two eras may be the central role of energy – namely the re-emergence of nuclear power – in today's AI boom, particularly with the backdrop of rising great power competition and the ever-present specter of climate change.

Unlike the telecom infrastructure of the dot-com period (and data centers in today's cycle), which serve singular purposes, the expansion of nuclear infrastructure addresses multiple critical challenges. First, it promises to play a significant role in solving for the energy intensity and reliability demands of AI data centers. This is a problem we are looking at from several angles – nuclear and other forms of production, efficiency (via our industrial action layer research), and finally via an exploration of better distribution and grid resilience technologies.  

Beyond serving AI data centers, nuclear power (along with the other categories highlighted) meets the vast need for clean baseload power to accelerate decarbonization and supports the push for increased energy security amidst heightened geopolitical risk.

As a result, nuclear’s positioning as a one-size-fits-all solution to many of our most pressing concerns – and thus its (theoretical) resilience to the fortunes of any single macro factor – makes it an attractive “picks and shovels” opportunity perfectly in line with the three major economic supercycles of the next several decades (AI, climate, and national security) – provided the current momentum can overcome generations of cultural and political baggage.

This baggage is complex, equal parts social, economic, and regulatory with each reinforcing the other in a vicious cycle. High-profile accidents and proliferation risk have dominated the social consciousness for 40+ years. This, in turn, influences regulation which increases the red tape related to the development and deployment of safer, more effective modern solutions. As process knowledge is lost and talent leaves the industry, costs spiral even higher and we are left with the current state of affairs. 

Despite nuclear’s history as a safe, clean, and cost-effective technology, its costs have risen 33% while the cost of new solar generation has plummeted (down over 90%). The latter is, to be clear, a positive – we are pro-energy abundance more than we are wedded to a single approach – and showcases the power of technological learning rates when unleashed.

Current and Future Market Structure 

Today, the narrative – and the fundamentals – around advanced nuclear are finally shifting towards the positive across several dimensions.

  • Russia’s full-scale invasion of Ukraine provided a clear focal point for the importance of energy security and the role nuclear energy can play in decoupling from Russia. Between 2021 and 2022, the percentage of Europeans at least somewhat in favor of nuclear energy jumped substantially – in Germany from 29% to 44%, in France from 45% to 55%, and in the UK from 39% to 53%.

  • Energy has become the core bottleneck to scaling AI. While GPU scarcity dominated the preceding couple of years, everyone from Mark Zuckerberg to Sam Altman believes the next substantial step forward requires energy-related breakthroughs. As a result, big tech companies have become the “urgent buyers” necessary to drive market transformation. Microsoft’s actions signal a clear belief in the long-term need to expand nuclear capacity. Its recent 20-year power purchase agreement with Constellation, which will revive the Three Mile Island facility, is as symbolically important as it is economically.   

  • The capital markets are responding swiftly to this step change in demand, with financial institutions including BofA, Morgan Stanley, and Goldman backing a COP28 goal of tripling nuclear capacity by 2050. The commitment to triple capacity also came with support from the US, UK, Japan, Sweden, and the United Arab Emirates.

  • Regulatory support has not been limited to COP commitments. In the United States, for example, President Biden recently signed into law the ADVANCE Act, which aims to streamline licensing, promote R&D investment, and contribute to workforce development.

  • This follows on the heels of (tepid) progress on the deployment front. In the United States, Vogtle 3 and 4 are finally online, years late and billions over budget. Still, the finalized designs, established supply chains, and trained workforce should contribute to less expensive future deployment. This summer, TerraPower began construction on its advanced reactor facility in Wyoming. Meanwhile, SMR certification is building momentum in France as companies like NewCleo and Jimmy Energy look to deploy first-of-a-kind reactors by 2030.  

  • Finally, the characteristics of SMRs and AMRs, coupled with the aforementioned demand dynamics, have ignited venture interest in the space. Smaller form factors that can be deployed more flexibly and with a broader range of financing options have eased some concerns about the venture scalability of such projects. As a result, dozens of companies have been funded in recent years. Today, over 100 SMR and AMR designs are being developed at various stages and with different timelines across the world.

Key Early Assumptions / Potential Catalysts 

The improved environment around nuclear power leads us to a few critical questions, based on important assumptions about where scarcity and abundance sit in the nuclear value chain.

  • Assumption 1 → The timelines for most advanced nuclear projects are at least 7+ years out, likely longer if history is a guide. This may not align with our time horizon unless we can identify intermediate value-inflection steps that create opportunities for exit, similar to the life sciences ramp-up.

  • Assumption 2 → The crowded nature of SMR/AMR technology development (abundant capital and attention at that part of the nuclear value chain) and the lack of a clear set of winners should push us to areas of relative scarcity where solving key bottlenecks can accelerate the market overall (i.e. turning those hundreds of companies into customers immediately).

So where is there scarcity in the market that aligns with our investment horizon and approach? Three areas stand out initially, with each meriting a separate deeper dive should conversations with experts, operators, and founders push us in a certain direction. 

Fuel → Russian (and increasingly Chinese) dominance of key fuels and processing steps risks putting the West in a vulnerable position, which echoes the overreliance on cheap gas that contributed to the Ukraine invasion and the European energy crisis. Significant efforts are underway to de-risk the nuclear fuel supply chain. The US Congress passed a bill to limit the import of Russian uranium, while the “Sapporo 5” (Canada, Japan, France, the UK, and the US) announced plans to jointly invest over $4b to boost enrichment and conversion capacity over the next three years.

The biggest risk to decoupling from Russia has been HALEU (high-assay low-enriched uranium), the fuel many advanced reactors are being developed to run on. Until the Russian invasion of Ukraine, Russia was the only place with a plant licensed to produce this material. Companies like Centrus Energy and a still-under-the-radar Founders Fund-backed startup are targeting this bottleneck, which could be an important enabler of the broader advanced nuclear space.

Project Development → Over the last two years, much of my work has centered on how to best help “emerging industrial” companies scale more effectively. While my early assumptions largely centered on the need for new financing models, the critical bottleneck to deploying new projects across energy, materials, and manufacturing often turned out to be the availability of capable on-the-ground project development and execution. Given the deterioration of the nuclear talent base across most Western countries, this problem is even more severe.

A key problem with effective (i.e., on-time, on-budget) project development is the fragmentation of the subcontractors needed to build a project end-to-end. Some companies are aiming to solve this through a reactor-agnostic platform for nuclear project execution: through a comprehensive oversight strategy – taking direct control over the supply chain management, sourcing, workforce coordination, and financing required for constructing power plant fleets – such a company hopes to do for nuclear what SpaceX did for launch. Others are building fully modular, factory-made systems, innovating on the delivery and development model rather than at the reactor level.

Waste → Waste remains perhaps the most politically and socially charged aspect of nuclear energy, leading to decades of information warfare despite the relatively boring nature of the problem. Historically, the commercial incentives to store waste weren’t particularly attractive, making it a largely political endeavor. 

Today, countries and companies around the world are starting to see opportunities to turn waste from a burden into an opportunity through recycling.

July

Thesis

AI-enabled Services

We see an interesting opportunity in the context of end-to-end AI service providers. 

September 2024

Industry Transformation

We see an interesting opportunity in the context of end-to-end AI service providers. 

We believe that in certain cases, AI sold as a SaaS product can neither unlock its full potential nor allow developers to capture the value they are creating. There are a few reasons for this:

  • The limited reliability and lack of guaranteed perfect performance of AI models have led to their positioning as co-pilots rather than end-to-end task performers. A few use cases aside (e.g., coding), we don’t see such end-to-end task performers coming to market anytime soon. This means that successful deployment depends on adoption by a customer’s workforce. Naturally, this makes the ROI of these systems hard to measure and is paired with a sentiment shift that the productivity increases associated with those systems might have been overhyped. The fact that an intangible ROI is running against a very tangible cost of inference for developers does not make this any easier.

  • In a co-pilot world, breaking out becomes even harder for emerging companies. They have a structural disadvantage over established companies that can easily develop and distribute a co-pilot to their existing customers as part of their platforms. This is especially tragic for emerging companies because they require feedback loops and data to improve their algorithms. Without this, they inevitably fall behind the competition in terms of technical capabilities.

  • Pricing models that work in the context of software (e.g., seat-based) don't work in the context of AI, as the focus is often on productivity gains (i.e., getting more done with fewer seats). Therefore, there is a need for value-based pricing.

As a result, we see an interesting opportunity in the context of end-to-end AI service providers. These companies focus on one specific job and guarantee the successful execution and delivery of that job. Internally, these businesses will utilize AI as much as possible but keep high-quality domain experts who can jump in when issues arise to ensure successful job delivery. Over time, these companies accumulate substantial proprietary data from “end-to-end” feedback loops of delivering jobs. This holistic approach puts them in a unique position to develop best-in-class models for a specific use case, leading to increased automation. In the long term, the COGS of these businesses will converge toward the cost of computing.

In a lot of industries, professionals are either already using extremely sticky software or the software as it is offered doesn’t fit in the specific workflows (it is reasonable to assume that capitalism has led to every software niche being explored in the past 15 or so years). As mentioned above, many of the companies that have successfully acquired users at scale are already planning to roll out co-pilots as features of their products. For an AI-native company to build out a software suite and spend substantial amounts on customer acquisition is likely not the best use of resources.

This is circumvented by offering the delivery of an entire job and producing results that are compatible with whatever legacy software customers might be using. Over time, these companies might decide to build out or buy software platforms on top of their AI-enabled service, interlocking themselves with their customers’ processes and generating meaningful lock-in (cf. Nvidia’s software stack, Marvell’s IP portfolio).

In cases where the conditions for AI-enabled services to emerge exist (see criteria below), we see this as having potentially three effects on market structure:

  1. Consolidation: Some industries may see a consolidation where AI enables a few large players to dominate by integrating and scaling more effectively than before.

  2. Maintained Concentration: In other industries, concentration may remain the same, but new AI-enabled companies could compete head-to-head with existing large players, potentially reaching similar sizes and profitability.

  3. Fragmentation: Certain industries might experience fragmentation, where AI enables many smaller players to operate independently. This could lead to a need for new platforms or aggregators to organize and leverage these fragmented services.

We think the most interesting venture cases will emerge in the context of 1. consolidation and 2. maintained concentration. In the context of 3, it is interesting to explore the next-order effects (see Marketplace for AI-enabled Services and Products).

Independent of the structural market outcomes, the occurrence of AI-enabled service businesses requires specific conditions to emerge and thrive. We differentiate two types of requirements.

First, necessary conditions must hold for these businesses to pursue opportunities at all. Many of these opportunities, however, may become commoditized, leading to numerous profitable but modestly sized businesses, typically in the $10 million revenue range (i.e., fragmentation).

Therefore, for market outcomes 1. and 2. and venture-scale outcomes to occur, opportunities must have the potential for significant competitive advantages, or "moats." These moat conditions are likely present in only a small subset of AI-enabled opportunities.

Our primary objective is to identify and focus on the most promising categories where both the necessary and moat conditions are met. These categories represent the most attractive opportunities for substantial growth and success in the AI-enabled service sector.

Necessary Conditions

  • Objectifiable Outcomes: Objectifiability in outcomes is crucial to 1) training the models and 2) lowering transaction costs with customers. 

  • Easy to Hand Off from/to customer: Easy hand-off is critical to lower transaction costs and make sure the business can scale without integration work, etc. 

  • Technology Maturity: Technology utilized to deliver services needs to be sufficiently mature, or there needs to be a clear path for technology to be mature. In the beginning, human labor supporting the delivery of services is fine, but there needs to be a clear path to attractive unit economics with close to 90% automation. 

  • Value-based Pricing Possible: The thesis depends on a company's participation in the upside of what is 1) generated at a lower cost or 2) made possible due to a new service. It is critical for the economics that the service provider can sufficiently capture the value generated to ensure top-notch margins that are improving as the technology matures. 

Moat Conditions (at least one or two of them need to be true)

  • Enabling Technology Moats: Using off-the-shelf LLMs will not lead to sustained technology moats. Technology moats will emerge where the quality of the service offered relies on developing and integrating with other types of ML / AI, which will lead to some initial barriers to entry from a technical perspective.

  • Improvements in Technology from Feedback Loops: Building on the previous point, another source of possible moat is that technology improves through feedback loops, as services are delivered to customers and lessons are learned. This means that the market leader can improve its technology fastest, leading to winner-takes-most situations.

  • Sustained Unique Customer Access: Efficient customer acquisition for a lot of these businesses will be key to delivering top-notch profitability margins in the long term. Those categories where certain types of companies have unique access to customers (e.g., due to industry pedigree) are likely to be attractive. Especially when paired with the technology feedback loop condition outlined. 

  • Liability Underwriting: The assumption is that many of these service businesses will have to take liability for correctly delivering these services. If liability is a crucial aspect of the offering, the amount of risk a provider can take is a function of the cash on its balance sheet, which serves as a buffer for potential failures; better-capitalized companies can therefore be more aggressive.

  • Regulatory Moat: Benefit from licensure requirements and other regulatory hurdles, which act as a natural barrier to entry and provide a stamp of trust and credibility. However, it is unclear whether this is actually the case. Lawyers require licenses, but barriers to entry are fairly low and based on academic performance. If the underlying models are similar, won’t everybody or nobody get approved? 

  • Brand / Trust: Service businesses are inherently built on a high degree of trust and brand recognition. This trust and brand let customers know that somebody can be held accountable and that their choice won’t be questioned by their bosses or investors (e.g., hiring McKinsey, BCG, or the Big 4). It is likely that the same dynamics play out here and that this can be a source of sustained competitive advantage.

July

Thesis

The "Stranded Asset Exploration Company"

Building the KoBold of Remediation

May 2025

Environmental remediation is a large, lucrative market defined by fragmentation and dominated by a service-heavy model (see appendix). To date, the market, which by some estimates delivers $20b+ in annual revenue, has largely resisted productization and technologically-forward approaches. 

We believe change here is inevitable and that a combination of technological momentum and novel business and financing models is creating substantial company-building opportunities for new players.

Today, there are over 450,000 brownfield sites in the US and nearly 3 million suspected contaminated sites in Europe. As a result, over $3 trillion in real estate value is essentially stranded. Traditional cleanup approaches are slow and costly, with the average federal cleanup taking 12-15 years, thanks to substantial regulatory overhead, labor constraints, and limited automation. 

Having had a front row seat to the way KoBold has grown, we see potential for a similarly data-driven business model that i) identifies high-value contaminated sites, ii) applies superior remediation technologies (via advanced chemistry or otherwise), and iii) captures the value differential between pre- and post-remediation states through, for example, real estate development or the redevelopment or repurposing of stranded assets (energy, resources, industrial equipment).

The “why now” is backed by a technological convergence that is changing the economics of brownfield transformation. Advanced sensing can precisely map contamination without extensive manual sampling, while innovative treatment methods can destroy persistent pollutants in days rather than years. Industrial robotics is increasingly becoming viable, which is particularly important in a market where labor constraints are substantial and worsening (thanks to an aging workforce and little infrastructure for backfilling the gap).

We imagine the ideal business model and capital stack will resemble KoBold’s hybrid approach: venture capital for technology development paired with infrastructure-like financing for asset acquisition and development. This bridges fast-moving tech-enabled discovery with slower land and asset development cycles while allowing value capture from both services and asset appreciation.

We see remediation as a wedge to a broader opportunity, which we have referred to in the past as a “stranded asset exploration company”. Companies will likely start by focusing narrowly on specific geographies, pollutants, or site types (mines, agriculture, waterfronts, etc.) before expanding.

PFAS in soil or water, for example, represents perhaps the most immediately addressable and culturally salient initial opportunity. However, there are a wide variety of potential starting points:

  • Urban brownfields in high-demand real estate markets

  • PFAS-contaminated military bases and airports

  • Former manufacturing facilities with prime locations

  • Mine tailings containing recoverable metals

  • Waterfront properties with sediment contamination

  • Agricultural land impacted by contaminants near expanding suburbs

  • Contaminated aquifers in water-stressed regions

Companies like Valinor and biotech precursors like BridgeBio present another structural model to consider for such a company. Our understanding of how the “go to market” model works in remediation, particularly around remediation of government-owned and monitored sites, is that it is highly relationship-based, even more so than other government procurement processes.

Thus, if a company can centralize those relationships and the playbook for winning remediation mandates, they can scale that across a range of vertical focuses that can be staffed and financed in ways that are right-sized for a given opportunity. There are also centralizable technical capabilities that can be developed, from advanced chemistry to robotics.

There is substantial policy uncertainty surrounding the opportunity. The EPA and the environmental market overall are hot-button political issues where resourcing could be shifted or shut down with little warning. However, we think the amount of trapped value in the form of real estate, industrial equipment, and resources (minerals, energy, etc.) that can be unlocked through better data and automation will make this opportunity resilient against policy shifts. 

July

Thesis

Scientific Knowledge Acceleration

Building the 21st Century "Max Planck Club"

April 2025

Infrastructure Tech

One way to think about the slowdown in scientific discovery is that it is operating at the wrong level of resolution. Our current research ecosystem – constrained by peer review, “publish or perish” productivity incentives, and hyper-specialization – has systematically dismantled the scientific freedom that enabled the researchers behind the 20th century's most transformative discoveries – what Donald Braben termed the “Max Planck Club”.

This creates massive opportunity costs to society and signals a significant company-building opportunity. 

We see two emerging pathways to recalibrate this resolution:

  1. Individual Amplification: Tools that enable exceptional researchers to transcend institutional constraints and operate like one-person research institutes. AI-augmented scientific workflows could dramatically expand what a single scientist can accomplish across literature review, experiment design, and analysis.

  2. Mass Collaboration: Platforms that enable unprecedented collective intelligence by breaking down information silos between disciplines and creating new coordination mechanisms for distributed discovery.

We are most interested in pathways focused on one or more key levers below:

Funding & Incentive Redesign: Models that reward researcher autonomy, risk-taking, and open collaboration rather than incremental, pre-specified outcomes. This includes both traditional capital allocation mechanisms and novel approaches like prediction markets for scientific outcomes.

Human-AI Scientific Workflows: Tools that seamlessly combine what AI is good at (e.g., literature review, document creation) with human scientific intuition (e.g., better questions).

Proprietary Data Assets: Platforms that generate unique, structured scientific data through novel experimental methods or researcher collaboration.

While many of the hybrid non-profit/for-profit research institutions that emerged in 2021-22 (the various attempts at building an OpenAI for Chemistry or Biology) have failed to gain traction, we remain interested in thoughtful full-stack approaches, particularly given the capital availability that now exists again for such “compound” company-building models. The most compelling version might integrate all these elements – new funding models, workflow tools, and collaboration platforms – into coherent systems that reshape how science is produced and validated.

While we are already invested in Sakana and recognize that many solutions may fit well within the attention radius (if not the current technical blast radius) of large AI labs, given the philosophical proximity of science and general intelligence, we also believe the “why now” argument is compelling: trust in academic institutions is at an all-time low, and the geopolitical necessity of scientific breakthroughs is at a generational high point.


July

Thesis

AI-Driven Industrial Design

The rapid advancement of AI in design is paving the way for entirely new workflows in industrial design and CAD

April 2025

The evolution of Computer-Aided Design (CAD) and rendering tools has significantly transformed design and manufacturing since the 1960s, transitioning from basic 2D drafting to advanced 3D modeling and simulation. Despite these advancements, slow design processes and rendering times remain critical challenges, leading to productivity losses, missed deadlines, and increased costs. It is trite to say that accelerating the pace at which we can design these systems is directly related to the pace at which societies can tackle critical problems.

AI can play a critical role in the product conceptualization, design, and optimization process. In this context, we’ve been actively exploring AI-driven CAE. However, we are equally excited about the role that generative AI can play in the earlier stages of the product design process. 

The rapid advancement of AI in design is paving the way for entirely new workflows in industrial design and CAD. These AI-driven workflows will fundamentally reshape the early design process by enabling ultra-fast iteration, real-time visualization, and seamless exploration of concepts. By drastically reducing the time required to generate initial renders, AI empowers designers to experiment more, refine their ideas faster, and create tangible prototypes much earlier in the design cycle. The ability to move rapidly from idea to visualization represents a fundamental shift in how products are conceived, opening up new opportunities for creativity and innovation.

However, while AI-assisted sketching and rendering significantly accelerate the concept ideation phase, the real challenge—and likely the biggest value creation opportunity—lies in automating the CAD modeling process. AI-driven CAD tools are emerging to bridge this gap, translating initial sketches and AI-generated renders directly into manufacturing-ready 3D models. 

We have two hypotheses on how this market might play out:

  1. Our first hypothesis is that nailing the AI-powered CAD generation process first is the optimal entry point into this new design paradigm. Once the AI can accurately generate parametric, production-ready CAD files, the natural expansion will be into AI-driven visualization and early-stage exploration – moving in the reverse direction, from CAD to concept rendering, rather than the other way around.

A key distinction in this evolution will be the complexity of the designed components. There will likely be different workflows for simpler, standalone parts (which AI can quickly generate and refine) versus larger, more complex systems that require deep engineering validation and assembly considerations. While AI may rapidly accelerate the early stages of visualization, the CAD side of the equation remains more technically demanding, making it the logical “wedge” to solve first.

  2. Our second hypothesis is that, depending on the complexity of a given product, AI-powered CAD may initially be limited to simpler parts, where automation can handle straightforward geometries, constraints, and manufacturability checks. In contrast, more complex assemblies and intricate systems (such as a car, aircraft, or industrial machinery) will likely continue to require traditional CAD workflows with human oversight, at least in the near to mid-term.

Depending on how long this takes, we may see the emergence of two parallel AI-driven tool categories:

  • AI-CAD for Simple Parts – Fully automated or highly AI-assisted CAD tools that quickly generate production-ready, 3D-printable components with minimal human intervention. (e.g., Backflip)

  • AI-driven design exploration for complex systems – AI-driven workflows that assist in iterative visualization, ideation, and optimization but still require manual refinement in traditional CAD tools for complex, interconnected systems. (e.g., Vizcom)

While the long-term goal is likely a seamless AI-driven pipeline from concept to production, the mid-term reality may be a split in AI tooling—one focused on streamlining simple part generation and another enhancing complex system design and iteration without replacing the full CAD process. The key question in this scenario is how long it will take for AI-CAD to handle complex systems.

Note: AI-powered CAD for industrial design must be distinguished from AI-generated 3D assets, which focus on visually appealing but non-engineered models for gaming and AR/VR. Unlike simple AI 3D generators, AI-CAD tools must produce parametric, editable, and manufacturing-ready models with real-world constraints, enabling seamless integration into production workflows.
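As a rough illustration of this distinction – parameters in, editable solid out – consider how a parametric CAD kernel like the open-source CadQuery library works. The parameter values below are hypothetical stand-ins for what an AI-CAD tool might emit from a sketch or prompt; this is a sketch of the output format, not any company's product:

  import cadquery as cq
  from cadquery import exporters

  # Hypothetical parameter set an AI-CAD tool might generate for a mounting plate
  length, width, thickness, hole_diameter = 80.0, 60.0, 10.0, 6.0   # mm

  plate = (
      cq.Workplane("XY")
      .box(length, width, thickness)                       # base plate
      .faces(">Z").workplane()                             # sketch on the top face
      .rect(length - 16, width - 16, forConstruction=True) # hole-pattern layout
      .vertices()
      .hole(hole_diameter)                                 # four mounting holes
  )

  exporters.export(plate, "plate.step")  # STEP output slots into downstream CAM/PLM tools

Because the geometry is driven by parameters rather than baked into a mesh, changing a dimension regenerates a valid, manufacturable solid – which is precisely what separates AI-CAD from AI-generated 3D assets.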

July

Thesis

E(X)↑ C↓

On the emergence of "markets in everything"

April 2025

Digital Transformation

We are entering an era where the notion of “markets in everything” – long an economist blogger trope – is happening at never-before-seen speed and scale. Prediction markets are one obvious instantiation of this trend. 

But the opportunity is more expansive than that. Massive amounts of (real-world) data coupled with rapidly improving processing and sensemaking capabilities create the foundation for understanding and pricing previously opaque risks, as we have written about in multiple memos: new standards, insurance products, or ways to price externalities. We expect this trend to accelerate as AI compresses the time between cause and detection. Information asymmetries will close faster, enabling players to identify and act on risks sooner. We especially see an opportunity in areas where AI not only helps detect these information advantages (unless there is unique access to data, they are only temporary) but can also help act on them in much more efficient and effective ways, shifting the expected ROI of specific opportunities.

One example we are excited about in this context is detecting fraud and corporate misconduct related to harmful products. Instead of waiting years for harmful products to be flagged by regulators, platforms can mine medical and consumer data to spot dangers early. Combining these insights, AI-enabled reduction in litigation costs, and increases in litigation success rate (e.g., as a result of early issue detection and case abandonment), the ROI associated with contingency fees can become increasingly attractive. The key question for such a business is how it can establish itself as the primary consumer platform for litigation that enables it to maximize its volume and acquire proprietary data at scale.
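A stylized back-of-the-envelope – with entirely made-up numbers – shows how the two levers compound in contingency-fee litigation: raising the probability of success (E(X)↑, e.g., via early issue detection) while lowering litigation cost (C↓) can turn marginal cases into attractive ones:

  # Hypothetical contingency-fee case economics; all figures are illustrative.
  award     = 5_000_000   # expected award if the case succeeds
  fee_share = 0.33        # contingency fee captured by the platform

  def roi(p_success, cost):
      expected_fee = p_success * award * fee_share
      return (expected_fee - cost) / cost

  print(roi(0.30, 400_000))   # baseline: ROI ~0.24
  print(roi(0.40, 250_000))   # AI raises the win rate and cuts cost: ROI ~1.64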

Another example is that this could give rise to a new class of distressed debt investors. AI streamlines distressed investing by quickly analyzing legal documents and covenants to pinpoint optimal positions in the capital stack while leveraging historical bankruptcy data to forecast restructuring outcomes under different legal regimes. Additionally, AI-driven litigation analytics and scenario modeling empower investors to decide between litigation and out-of-court restructuring to maximize recoveries strategically.

July

Thesis

Foundation Models for Structured Data

Unlocking the "action layer" across physical industries

March 2025

Industry Transformation

The rise of foundation models has transformed how we handle unstructured data like text and images. Yet many organizations still struggle to extract value from their structured data (tables, databases, time series). A study by MIT and Databricks found that only 13% of large enterprises are driving material business value from data and machine learning strategies – a problem unlikely to be eased as data volume grows exponentially, regulations expand, and talent shortages persist. 

Every industry runs on structured data. Faster, more accurate analytics directly impact operational decisions and bottom-line results, and we believe companies laser-focused on delivering this improvement have the potential to create and capture substantial value.

The current landscape around structured data is – and always has been – defined by bottlenecks: data integration across disparate systems, manual feature engineering requiring domain expertise, model selection involving extensive trial-and-error, and complex workflow integration. Foundation models address these challenges by learning from vast datasets to automate these processes end-to-end.

Recent breakthroughs like TabPFN demonstrate that models trained on millions of synthetic datasets can achieve zero-shot predictions on new tables, consistently outperforming traditional methods. These models eliminate the need to train from scratch for each new problem, creating a paradigm shift in how organizations approach analytics. The combination of these algorithmic advances with multimodal capabilities, evolving business models focused on capturing outcome-based value, and persistent talent constraints is accelerating adoption.
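To give a sense of what “zero-shot” means in practice: TabPFN ships a scikit-learn-style interface, so a new tabular problem requires no per-dataset training loop. The sketch below follows that published pattern, though exact arguments may vary by package version:

  from sklearn.datasets import load_breast_cancer
  from sklearn.model_selection import train_test_split
  from tabpfn import TabPFNClassifier   # pip install tabpfn

  X, y = load_breast_cancer(return_X_y=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  clf = TabPFNClassifier()      # no architecture search, no hyperparameter tuning
  clf.fit(X_train, y_train)     # "fit" conditions the pretrained prior on the table
  print(clf.score(X_test, y_test))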

Despite the generalizability of many emerging approaches, we believe the complexity of the workflows required to make use of the data and insight necessitates a sharp focus on building industry-focused solutions (with, of course, a strong vision for sequencing into further segments). 

Better availability and assessment of structured data is, in fact, a core element of our “action layer” thesis across critical industries. 

  • Insurance underwriting benefits from automated risk scoring, incorporating structured profiles alongside multimodal data like claim photos.

  • Life sciences, particularly clinical trials, can leverage synthetic control groups from historical trial data, reducing required patient numbers and accelerating drug development timelines.

  • Supply chain operations leverage time-series foundation models for demand forecasting, recognizing patterns across products and locations by transferring knowledge from similar historical patterns.

  • Manufacturing facilities deploy models trained on sensor data from thousands of machines to predict failures before they occur—even for equipment that has never failed before—enabling proactive maintenance and reducing costly downtime.

  • Financial services models can process years of transaction data to identify patterns across thousands of variables, enabling faster, more accurate risk assessment and fraud detection.

Across the model landscape (for structured and unstructured data), we continue to be open to different arguments on where sustainable value will accrue in this ecosystem – i.e., where does the proverbial LLM blast radius end? 

While specialized foundation models for structured data have clear advantages over general-purpose models today, the pace of competition makes a sharper focus on structured data inevitable. Thus, durable advantages may ultimately lie not in the models themselves, but in the surrounding elements of the stack – proprietary data access, workflow integration, and physical world action layers – the mechanisms that translate insight and predictions into real-world operational outcomes.

July

World View

Hardware Development Tooling

Enabling the physical technology supercycle

February 2025

Infrastructure Tech

Our ongoing exploration of the hardware development stack, from AI-driven CAE to PCB automation, has consistently pointed us toward a fundamental challenge: the immense complexity of coordinating diverse tools, stakeholders, and workflows across the hardware development lifecycle. While individual design tools have evolved, the job of orchestrating these pieces – managing requirements, test data, manufacturing handoffs, and team collaboration – remains a major bottleneck.

As Western economies pour unprecedented capital into hardware innovation across aerospace, energy, and defense, an entirely new class of hardware companies is emerging. And they are building with fundamentally different expectations around tooling and development speed. The incumbent hardware solution stack fails to meet these heightened expectations – it is fragmented across systems, heavily manual, and lacks real-time visibility. 

As a result, we have seen many emerging hardware companies rolling their own solutions to solve internal and external orchestration across the development lifecycle. Stoke Space’s Fusion, an internal tool that they externalized, is one such effort. This trend, which we have seen inside of several other companies, signals both the severity of existing tooling gaps and validates demand for better solutions.

As such, we see a Stripe-like opportunity to enable and capture a portion of the value created by this new class of companies through the type of critical, but boring, infrastructure that we have deemed “plumbers for reindustrialization” in other research.

We see three primary areas of opportunity for new companies at the orchestration layer:

Test Data & Observability: The proliferation of sensors and testing equipment has created data noise that existing tools can't handle effectively. Real-time analysis of test data, coupled with AI for anomaly detection and optimization – DevOps-like telemetry and monitoring – could transform validation processes that historically relied on manual review and tribal knowledge.
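To make the “DevOps-like telemetry” idea concrete, here is a deliberately naive sketch – synthetic data, rolling z-score baseline – of the kind of check such platforms would run continuously, at far greater scale and sophistication, across test-stand channels:

  import numpy as np

  def rolling_zscore_anomalies(signal, window=50, threshold=4.0):
      # Flag readings that deviate > threshold sigmas from their rolling baseline.
      anomalies = []
      for i in range(window, len(signal)):
          baseline = signal[i - window:i]
          mu, sigma = baseline.mean(), baseline.std()
          if sigma > 0 and abs(signal[i] - mu) / sigma > threshold:
              anomalies.append(i)
      return anomalies

  rng = np.random.default_rng(0)
  pressure = rng.normal(100.0, 1.0, 2_000)   # hypothetical pressure-transducer trace
  pressure[1500] += 12.0                     # injected transient, e.g., a sticking valve
  print(rolling_zscore_anomalies(pressure))  # expect index 1500 to be flagged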

Unified Data & Collaboration Hubs (Next-Gen PLM): The shift toward distributed engineering teams and expansive supply chains (e.g. via outsourcing) has exposed the limitations of current tools. Engineers spend a material amount of their time on non-value-added work like converting files, updating documents, or searching for the latest designs. Modern cloud-based hubs that unify product data (requirements, CAD, tests) could dramatically improve productivity.

Manufacturing Orchestration: The gap between design and manufacturing is a major bottleneck. Tools that automate the translation of designs into manufacturing instructions and provide real-time feedback on manufacturability could significantly reduce iteration cycles and costs.

New platforms built specifically for these emerging workflows – distributed by default, data-intensive by design, and automation-ready from the start – are naturally advantaged.

From a go-to-market perspective, focusing on emerging hardware companies helps orchestration companies avoid legacy processes and tooling and instead focus on shaping modern development workflows. These companies are building complex hardware under intense time (and capital) pressure – they need tools that can keep pace. As these tools prove their value to early adopters, they can expand both vertically (serving larger enterprises) and horizontally (connecting more parts of the development process). 

However, this means our thesis relies on this new class of hardware companies being a durable and scalable customer base. If the end game is dozens of sub-scale acquisitions and a select few successes – leaving today’s incumbent hardware OEMs as the primary market – the entrenchment of existing tooling and orchestration companies (from Siemens to Jama to PTC) will be harder to break.

Similar to what we have concluded in our research into AI-driven CAE, success doesn’t require displacing incumbent tools outright. Rather than competing head-on with entrenched CAD/CAE/PLM systems, new platforms can focus on making these tools work better together – becoming the connective tissue that coordinates modern hardware development. Once established as coordination layers, these platforms position themselves to expand their footprint over time.

The PLM and hardware development tooling market can already be measured in the tens of billions, but we believe the truly transformative companies will win by expanding the market and helping hardware companies iterate and build at the speed of software. This acceleration creates a powerful flywheel: faster development cycles enable more products, which drives increased tool usage and data generation, further improving development speed. Just as software development tools expanded their market by enabling faster iteration cycles, we believe the winners in hardware orchestration will grow the market by unlocking new levels of development velocity.

The risks are real – long sales cycles, integration complexity, and regulatory requirements in sectors like aerospace and defense. But we believe the confluence of market demand (driven by reindustrialization), technological convergence, and incumbent blindspots create a unique opportunity for new platforms to emerge.

July

Thesis

AEC Design Tooling

When will we see Figma for the build world?

February 2025

Industry Transformation

Autodesk is a $65B company with 90% gross margins and earnings growth of 10%+ annually over the past decade. It is, in the views of many practitioners in the ecosystem, a monopoly in the worst sense of the word – extractive price increases paired with degrading product quality, closed and proprietary standards that lock in customers, and a lack of feature-level evolution to meet the needs of architects, engineers, designers, and contractors.

But Autodesk is just a symptom of a deeper problem. The entire AEC technology stack has evolved to reinforce silos rather than break them down. Each specialty has its own tools, workflows, and data formats, creating artificial barriers between disciplines that naturally need to collaborate. The result is an industry that remains extremely inefficient – construction productivity has historically grown at under 1% annually despite billions spent on software.

Perhaps counterintuitively, given the stranglehold Autodesk (and other deeply ingrained products) holds, our early hypothesis is that focusing on design is the perfect wedge to transform this massive industry. Every project's design phase naturally brings together architects, engineers, contractors, and owners – each making decisions that cascade through the entire project lifecycle. This, in turn, creates the possibility to develop network effects (the type enjoyed by Autodesk) once at scale.

The question, then, is what creates the wedge for companies to build distribution in the first place and kick off the network effects flywheel – something that has been a challenge for new entrants, as evidenced by the lack of massive venture-backed outcomes to date. We believe several converging technologies are coming together to massively reduce the costs of experimentation, lower the barriers to real-time design collaboration between parties (minimizing the cascading delays that begin at the design phase), and expand the creative canvas of design possibilities.

  • WebGL and cloud-native architectures finally enable true browser-based 3D modeling at scale. Just as Figma used these technologies to make design collaborative, new platforms are rebuilding BIM from first principles for seamless multi-user collaboration.

  • Advances in physics-based simulation and generative AI allow instant validation of design decisions - emerging tools can compress structural engineering workflows from weeks to minutes and automatically optimize building systems for performance.

  • New platforms are bridging design and construction by translating BIM models directly into fabrication instructions, creating the potential to significantly reduce MEP installation costs.

We see three approaches emerging to leverage these technologies and begin embedding them into multi-stakeholder workflows:

  1. Next-gen cloud BIM platforms (e.g., Motif, Arcol): Browser-first collaborative design tools – i.e. "Figma for buildings". Here, we believe companies can build momentum through counter-positioning – API-first models that prioritize open document and data standards.

  2. AI-powered point solutions (e.g., Genia, Qbiq): Focused tools that dramatically accelerate specific workflows. Genia automates structural engineering analysis and optimization, while Qbiq uses AI to generate space planning options for real estate teams.

  3. Design-to-fabrication platforms (e.g., Stratus): Bridging the gap between design and construction by automatically translating BIM models into fabrication-ready instructions. Stratus has shown particular success in MEP, where better coordination can significantly reduce installation costs.

The path to end-to-end orchestration will follow a clear sequence: Start by connecting architects and engineers through real-time design collaboration. Then extend to contractors, automatically translating designs into construction planning. As the platform becomes the system of record for design and planning decisions, it can naturally expand into procurement, payments, and project financing - using its unique data position to reduce risk and unlock better financial products. Eventually, these platforms could have a shot at orchestrating the entire building lifecycle - from initial concept through operations and maintenance.

Most importantly, these platforms will enable fundamental shifts in business models and incentives. Today's hourly billing and fixed-fee structures actually discourage efficiency - architects and engineers are paid for time and deliverables, not outcomes. But platforms that can measure and validate impact could enable new performance-based pricing models. Early adopters might start with simple metrics like design iteration speed or coordination time saved. Over time, as platforms gather more data across the building lifecycle, they could facilitate true outcome-based contracts where designers and engineers share in the value they create through better, faster, more efficient projects.


July

World View

Agent Authentication

Enabling active agent decision making

January 2025

Infrastructure Tech

As AI agents evolve from passive copilots into active decision-makers — scheduling meetings, replying to emails, pulling sales reports — they inevitably confront a critical bottleneck: authentication. Without secure access to user data and enterprise systems, agents remain limited in scope and incapable of delivering real value. The future of agent utility depends not on marginally smarter models, but on building trust, control, and access into the very fabric of how agents interact with the digital world.

Today, there is no unified solution to this problem. Developers are left stitching together OAuth flows, manually storing API tokens, and hoping their agents don’t inadvertently leak sensitive credentials. Existing identity infrastructure, like Okta or Azure AD, is built for humans — not autonomous actors. Meanwhile, model providers like OpenAI intentionally avoid the liability of handling real-world authentication and access. This has created a deep infrastructural gap: there is no secure, reliable, and developer-friendly layer for agents to act on behalf of users across different tools and services.

The company that solves this will not only unlock the next phase of agent capabilities, but also define a new category in developer infrastructure. It will offer a clean, programmable interface for granting agents time-limited, scoped access to perform tasks securely — whether it’s sending an email, querying a CRM, or posting in a team chat. Consent, identity, permissioning, and auditability will all be handled by design. Prebuilt integrations to common services, combined with a powerful SDK, will abstract away the complexity of working with dozens of authentication flows. Crucially, it will shield agents from direct exposure to tokens or secrets, ensuring compliance and mitigating risk.
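To make this concrete, below is a minimal sketch, in Python, of the developer experience such a layer could offer. Every name here (AgentAuthClient, Grant, request_grant, act) is hypothetical, invented for illustration rather than drawn from any existing SDK; a production system would run a real consent flow and proxy calls to the underlying services.

```python
# Hypothetical sketch of a developer-facing agent-auth layer.
# All names (AgentAuthClient, Grant, etc.) are illustrative, not a real SDK.
import datetime
import secrets
from dataclasses import dataclass, field


@dataclass
class Grant:
    """A time-limited, scoped credential issued to an agent.

    The agent holds only the opaque grant_id; the underlying OAuth token or
    API secret never leaves the auth layer.
    """
    agent_id: str
    scopes: list
    expires_at: datetime.datetime
    grant_id: str = field(default_factory=lambda: secrets.token_urlsafe(16))

    def allows(self, scope: str) -> bool:
        return scope in self.scopes and datetime.datetime.utcnow() < self.expires_at


class AgentAuthClient:
    """Issues grants after user consent and proxies actions on the agent's behalf."""

    def __init__(self):
        self._grants = {}
        self._audit_log = []

    def request_grant(self, agent_id: str, scopes: list, ttl_minutes: int = 15) -> Grant:
        # In a real system, this is where the user consent flow would run.
        grant = Grant(
            agent_id=agent_id,
            scopes=scopes,
            expires_at=datetime.datetime.utcnow() + datetime.timedelta(minutes=ttl_minutes),
        )
        self._grants[grant.grant_id] = grant
        return grant

    def act(self, grant_id: str, scope: str, action: str) -> str:
        # Every action is checked against the grant and written to an audit trail.
        grant = self._grants.get(grant_id)
        if grant is None or not grant.allows(scope):
            raise PermissionError(f"Grant does not permit scope '{scope}'")
        self._audit_log.append((grant.agent_id, scope, action))
        return f"executed: {action}"  # the proxy would call the real service here


client = AgentAuthClient()
grant = client.request_grant("inbox-agent", scopes=["email:send"], ttl_minutes=10)
print(client.act(grant.grant_id, "email:send", "send weekly summary to team"))
```

The design choice worth noting is the indirection: the agent only ever sees an opaque, expiring grant, so consent, scoping, and auditability live in one place and raw credentials are never exposed to the model.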

We believe that such a company will gain early traction among developers building agents for productivity, customer support, and internal automation. But its long-term power lies in standardizing the very way agents access and act within digital systems. Just as Stripe abstracted payments and Twilio did the same for communication, this platform will abstract action itself. It will be the trusted intermediary between intent and execution, forming the backbone of an emerging agent economy.

In doing so, it won’t just authenticate agents — it will enable them. And that will make it indispensable.

July

Thesis

Geospatial Intelligence

The complexity of understanding and managing our physical world is increasing exponentially.

January 2025

Infrastructure Tech

Why is this important?

The complexity of understanding and managing our physical world is increasing exponentially. Climate change creates both acute (e.g. wildfires) and chronic stress on our (aging) physical infrastructure. Supply chains are becoming more intricate and, in the face of geopolitical tensions and the energy transition, are reconfiguring on a global basis in real time. 

Geospatial intelligence – novel physical world data captured via optical, multi-spectral, hyperspectral, and other advanced sensor systems via satellites, ground stations, and other modalities – represents a critical substrate for building the advanced action layers (e.g. truly comprehensive world models) that will power fundamental industry transformation in areas like mining, energy, agriculture, and defense. 

However, the trajectory of the geospatial intelligence market has largely been a story of massive perceived potential and disappointing results for builders, investors, and customers. While the use cases have been evident for decades, commercial value at scale has been slow to materialize, and the net ROI of most earth observation companies has likely been negative. Adoption has been broad, but shallow – few commercial customers spend more than $1m per year on data and value-added services related to geospatial intelligence. Leaders on the upstream data collection part of the value chain (like Airbus and Maxar) still rely on government customers for a majority of their business, while companies like Planet Labs still struggle to project commercial demand from quarter to quarter, indicating a lack of urgency attached to the data and analysis being offered.

Solving the bottlenecks around geospatial intelligence that have kept deep commercial adoption out of reach – namely expensive data acquisition costs (for high-fidelity data), fragmented data accessibility, and a lack of connectivity from data to core enterprise/industrial workflows – has substantial implications for economic growth and human flourishing. The World Economic Forum projects that geospatial intelligence, as a platform technology, has the potential to drive more than $700 billion in economic value annually by 2030. A vast majority of this value will be created in critical physical industries – transforming land use, mitigating natural disasters, transforming how we build and maintain infrastructure, reducing greenhouse gases, and addressing security and safety issues more proactively.

Why is this interesting? 

We believe these bottlenecks are finally beginning to fall thanks to two converging factors – technological step-changes and the emergence of urgent buyers for the key technological building blocks that will make cheap, precise, and actionable geospatial data possible. 

  • Launch costs have fallen 80-90%, enabling massive sensor deployment. While it took nearly 60 years to put 2,000 satellites in orbit, we launched 3,000 in 2023 alone

  • Next-generation sensors are achieving unprecedented coverage and precision. Today's systems can detect not just the presence of objects but their composition and behavior from hundreds of kilometers away, at sub-meter resolution

  • AI and compute advances have collapsed processing times and made it possible for non-specialists to make sense of multi-modal data – what took human analysts years now often takes minutes

The demand-side pull, while still not fully materialized, is equally important and developing quickly:

  • Insurance companies – and the entire insurance model – face existential pressure from climate-driven catastrophic losses (and regulatory intervention). Beyond risk assessment capabilities, improved, more transparent/accessible tooling can help to rebuild trust in this important segment of the financial system. 

  • Autonomous systems (and with it shorter decision-making windows) are increasingly factoring into defense and intelligence operations, putting a premium on breaking down the current data silos to develop advantaged (precise and real-time) sensemaking capabilities.

  • As we have observed through KoBold, the energy transition is creating entirely new customer segments (and forcing agility from large incumbents) focused on critical mineral discovery, methane detection, and other resource categories like water or forestry. 

  • Infrastructure operators, utilities, and construction firms are scrambling to maintain the trillions of dollars of assets needed to reindustrialize, electrify, and – more critically – simply keep the modern way of life (e.g. clean water) running. Proposed initiatives like The Stargate Project create another major tailwind for the geospatial intelligence market.

Above are just a handful of the use cases we have been most exposed to through our investments and research. Like most great platform technologies, though, we believe many of the most valuable applications will be emergent. Thus, as we look at investments in the category, we are most interested in companies positioned to surf rather than compete with the dual blast radii of LLMs and Space Launch.

Which areas are most investible? 

Sensor Advantage / Infrastructure → While much of the sensor stack is being commoditized, competition at the powerful world model level (e.g. Niantic’s LGM) will drive demand for truly differentiated imaging and sensor suites. High precision, platform agnostic, high bandwidth, and real-time hyperspectral imaging stand out.

Data Fusion → As launch (and other sub-orbital geospatial sensor deployment) grows exponentially, data generation will scale along with it. If the status quo holds, silos and the need for bespoke solutions will only worsen. There is a Snowflake-scale opportunity to build data warehousing and piping for multi-modal geospatial data.

Geospatial Data as an Industry Transformation Wedge → Similar to Gecko in robotics, we believe the most valuable geospatial companies won’t be thought of as geospatial companies when all is said and done. Instead, we see major opportunities to use geospatial data as a wedge to build the workflows and intelligence engines that transform physical industries.

July

Thesis

Industrial Energy Efficiency

Energy demand is rising for the first time in over a decade thanks to rapid electrification, reshoring of manufacturing, and perhaps most notably, AI.

January 2025

Industry Transformation

Energy demand is rising for the first time in over a decade thanks to rapid electrification, reshoring of manufacturing, and perhaps most notably, AI. This demand is being driven top-down via policymakers and bottom-up from the private sector. Regulations like the IRA and CHIPS Act have driven significant growth in new manufacturing construction. Meanwhile, energy constraints have overtaken GPU availability as the core bottleneck to scaling AI for companies like Microsoft, Google, Meta, and OpenAI. 

The willingness of big tech companies to spend whatever is necessary to access energy in the name of AI has led to amusing re-estimations of future data center energy demand every few months.

“[Our expectation of] 83GW is up from ~56GW from the prior September 2023 modeling. Overall McKinsey now forecasts US data center energy consumption in terawatt hours (TWh), rising to 606TWh in 2030, representing 12% of total US power demand. Critically, this is up from ~400TWh in the September 2023 modeling refresh. This is relative to 147TWh in 2023 and 4% of overall US power demand.”

Meeting this energy demand, whether in service of climate objectives or geopolitical, energy, and technological sovereignty priorities, is of existential concern to economies around the world. As the saying goes, there is no such thing as an energy-poor rich country. Europe, in a trend that has continued since Russia invaded Ukraine, continues to struggle to meet the energy-related needs of its industrial champions. This has pushed them in droves to the US and other geographies, putting the continent's competitiveness, productivity, and growth at risk. 

Energy abundance generally and response to data center demand specifically hinges on three important pillars: new power production, better transmission and distribution, and more efficient utilization.

As highlighted in other research, owning and operating physical assets can provide companies with a tremendous moat and allow them to capture more of the value they create. For this reason, companies focused on new power generation or physical infrastructure related to better transmission and distribution are interesting. However, such opportunities are often held back by factors like permitting that are outside their immediate control. 

Efficiency, on the other hand, is a problem best addressed by software and AI. This is particularly true for commercial and industrial buildings, which account for ~20% of final energy use (and rising thanks to the growth factors highlighted above). In some places, like Ireland, data center use alone promises to consume nearly one-third of grid capacity in the near future. As energy costs become a more substantial profitability factor and increased consumption puts pressure on sustainability objectives, better solutions for commercial and industrial energy efficiency represent one of the biggest opportunities of the next several decades.

Many of these operations have concrete optimization functions with goals and constraints. In many cases, however, the world's complexity is too great for humans to grasp, so we fail to set up appropriate optimization functions and systems around them, leaving those systems far from their global optima. That's where we see massive opportunities for reinforcement learning. Advanced RL has enabled us to address areas previously infeasible to optimize due to their complexity. 

Managing the energy usage of highly energy-intensive operations (e.g., data centers, cooling facilities, etc.) fits these criteria. RL models are capable of driving significant performance improvements autonomously, saving substantial energy and cost. Phaidra, one company that applies these models, was started by former Google employees who deployed these methodologies at Google data centers and saw up to 40% energy savings. They recently announced that they could drive energy savings of 16% at Pfizer's data centers. Meta has published similar efforts. 
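To illustrate the framing, below is a toy sketch of treating setpoint control as a reward-driven optimization problem. The plant model, numbers, and the simple epsilon-greedy learner are invented for illustration and are far simpler than the RL systems deployed by companies like Phaidra.

```python
# Toy illustration of framing cooling control as an optimization/RL problem.
# The plant model and all numbers are invented for this sketch.
import random

SETPOINTS = [16, 18, 20, 22, 24]  # candidate chilled-water setpoints (degrees C)


def energy_cost(setpoint: float, outside_temp: float) -> float:
    """Invented plant model: colder setpoints cost more, hot days cost more."""
    return (26 - setpoint) * 1.5 + max(0.0, outside_temp - setpoint) * 0.8


def constraint_violation(setpoint: float, outside_temp: float) -> bool:
    """Safety constraint: server inlet temperature must stay below a limit."""
    inlet = setpoint + 0.3 * max(0.0, outside_temp - setpoint)
    return inlet > 27


# Epsilon-greedy bandit over discrete setpoints: a minimal stand-in for the
# far richer RL controllers described above.
value = {s: 0.0 for s in SETPOINTS}
count = {s: 0 for s in SETPOINTS}

for step in range(5000):
    outside = random.uniform(10, 35)
    s = random.choice(SETPOINTS) if random.random() < 0.1 else max(value, key=value.get)
    # Reward is negative energy cost, with a large penalty for constraint violations.
    reward = -energy_cost(s, outside) - (100.0 if constraint_violation(s, outside) else 0.0)
    count[s] += 1
    value[s] += (reward - value[s]) / count[s]  # incremental mean update

print("learned preference:", max(value, key=value.get))
```

The interesting part is not the learner but the reward shape: once goals (energy cost) and constraints (inlet temperature) are encoded, the optimization can run autonomously at a granularity no human operator could match.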

One of the key questions is whether there is enough quality data from sensors to support these plans and whether the physical world (and its controls) is digitized enough for models to drive actions in it. It is likely reasonable to assume that digitization has penetrated far enough to provide reasonable granularity and actionability, though the working assumption is that more data and more actionability are always better. 

This field sits at the intersection of two areas that are core to our broader AI theses: 

  1. Massive economic value creation will happen in the physical world.

  2. The most interesting AI use cases are in areas where AI helps us develop an understanding of parts of the world whose complexity previously put them beyond our grasp. In previous writing, we have referred to these as "expansion" use cases. 

Moreover, similar to KoBold, we expect that building a company in this space will require hiring world-class people across various fields: 1) AI/ML, 2) Software, and 3) Optimization of niche systems. We believe that companies able to combine these three talent sources will build up substantial talent moats.

July

Thesis

Spontaneous Software

As LLMs can create software cheaply and agents become skilled at connecting user experiences in novel ways, companies are starting to push ideas around self-assembling/spontaneous software.

January 2025

Fundamental Consumer

As LLMs can create software cheaply and agents become skilled at connecting user experiences in novel ways, companies are starting to push ideas around self-assembling/spontaneous software. We believe that, enabled by LLMs, a new paradigm could be on the horizon that increasingly merges the creation and consumption of software and makes a longstanding vision a reality.

We have previously written about this in the context of business software (see here), but we see an equally interesting opportunity in pro/consumer software and applications. It is important to stress that this is an incredibly nascent area with more questions than answers. 

A few of the questions we have: 

  1. Where does this happen? For these experiences to feel genuinely magical and software to feel spontaneous, LLMs must have the maximum context of a user's digital experience, data, and usage patterns across applications. The most likely place for this to live is within a user's operating system. Assuming operating systems are too slow to adopt, apps will likely emerge. However, it is unclear how durable their staying power will be and how useful they will remain if the tools and experiences they create/enable are removed and not interconnected with default operating systems. In that case, the default place where these things live could be the interfaces of the large LLM providers. Claude has taken steps in that direction. 

  2. How do these systems' privacy mechanisms work? As described above, they require a lot of context to feel magical. The question is how this context is handled privately. Some approaches mitigate risk, such as private cloud enclaves, but there could be a world where these kinds of applications only start taking off once models 1) have memory and 2) can run on consumer devices (e.g., phones and PCs).

  3. What do monetization and business models look like here? It is unclear how much users will pay for custom software tools, especially if this requires work to create them. Only 30% of Android users customize their OS, and the current app paradigm has not trained people to pay for utility-type services (the result of a combination of tools being used for lock-in and ad-supported models). In a world where apps become cheaper to produce and likely more abundant (due to the same dynamics discussed here), it is unclear whether most users will not just use apps that are increasingly available for niche use cases until software becomes completely self-assembling, anticipating users' every intent ahead of time. 

If we find good answers to these questions, we will be excited about this space and its potential.  

July

World View

Digital Antidote

Touching grass.

January 2025

Fundamental Consumer

As culture becomes more homogenous and consumption more solitary (a conundrum we wrote about in Airspace and Bubbles), consumers increasingly crave ways to identify with 1) physical brands, 2) physical/ephemeral experiences, and 3) their local/smaller communities and their local brands. 

While this can take many shapes, we see the potential to build significant businesses around them and keep our eyes open for them. To give a few examples: 

  • Next-generation sport leagues

  • Strong local restaurant brands and emerging subscriptions, events, etc.

  • Increased inflow into mega-churches that offer smaller group gatherings 

  • Local Fashion Brands (e.g., Bandit)

  • Athlete/chef retreats (e.g., Adam Ondra clinic; Mads Mikkelsen Japan Trip) 

  • Running clubs for dating

  • ...

That being said, there are some structural challenges around how scalable these things are and to what extent they are venture cases.


July

Thesis

LLM-enabled Toys (Care Companions)

LLMs are enabling novel embodied AI use cases.

December 2024

Fundamental Consumer

LLMs are enabling novel embodied AI use cases. We think it highly probable that in 5 years, most toys, from stuffed animals to action figures to Barbies, will have some kind of LLM-enabled voice capability. We see a few benefits associated with these LLMs: 

Naturally, we believe that data privacy and safety are crucial to these toys being beneficial and successful. Therefore, we believe them to have the following properties: 

We see an interesting opportunity for a commercial player to emerge here. Specifically, we see an opportunity to build an operating system that meets the standards above and enables owners of IP and distribution to build on top. In addition, we see significant opportunities to extend this platform in other areas, such as elderly care.


July

Thesis

Unlocking Tacit Knowledge Through Constellations of Experts

The relationship between individual and organizational performance has historically been governed by management frameworks – from Alfred Sloan's GM to Andy Grove's creation of modern OKRs at Intel.

December 2024

The relationship between individual and organizational performance has historically been governed by management frameworks – from Alfred Sloan's GM to Andy Grove's creation of modern OKRs at Intel. These systems attempted to solve the challenge of measuring, improving, and scaling human potential across an enterprise. Yet they remained constrained by the limits of human observation and the difficulty of capturing tacit knowledge – the intuitive expertise that defines mastery of a task but has, thus far, mostly resisted codification.

Over the last 20 years, "game tape" and statistical sophistication have revolutionized athletics (and other highly quantifiable professions like enterprise software sales) by enabling precise feedback loops, accountability, and recognition. AI is now driving a similar transformation of the broader professional universe, where the relationship between inputs and outputs is often harder to grasp. Professionals have always valued mentorship and coaching. But access has historically been limited by cost and scale (hence "executive" rather than "employee" coaching). AI promises to democratize this type of performance enhancement (and an organization's ability to measure it) in the same way that companies like Synthesis address Bloom's Two Sigma problem in education. 

Our hypothesis is that “constellations of (AI) experts” – deployed across every facet of professional development and organizational performance – will become as fundamental to career success as mentors and coaches are to elite athletes today. Several converging catalysts are making this possible. 

  • The mass market deployment of co-pilots and proto-agents has rapidly normalized AI-human collaboration. More than 60% of physicians now use LLMs to check drug interactions and support diagnosis – early evidence of adoption for high-leverage decision support. 47% of Gen Z employees say ChatGPT gives better career advice than their boss – signaling dissatisfaction among young workers with the status quo.

  • The proliferation of audio/video capture in professional settings generates rich data to help these systems better understand and improve performance. People increasingly operate with the assumption that every call is recorded, while younger employees regularly go viral for sharing layoff videos online. 

  • The economics of AI are reshaping both organizational structures and individual incentives. Companies are shifting from fixed to variable cost models, flexing labor (human and agent) up and down based on demand.  This, in turn, is shifting how workers are measured and compensated. As a result, professionals must proactively adapt to succeed in this new paradigm where human judgment and AI capabilities become increasingly intertwined.

We see several areas where the “constellations of AI experts” will be professionally valuable. In each of these categories, we expect the most successful platforms will combine automated interactions, human experts in the loop, and content/validation that come together to create holistic systems of improvement. 

  • Organization-wide solutions that integrate deeply with company context to provide AI-powered coaching and feedback loops. While employees have shown a willingness to trade privacy for better tools, trust and security guardrails are essential. 

  • Individual-focused platforms that grow with professionals throughout their careers, combining performance enhancement with credential creation in an increasingly fluid labor market. 

  • Solutions for high-turnover industries that capture and distribute best practices to improve training efficiency and retention (e.g. frontline audio-first interfaces)

  • SMB owner enablement systems in areas like the skilled trades and family medicine, to make it possible to i) capture and transmit tacit knowledge (streamlining hiring/training while increasing terminal value) and ii) help operators compete without needing to bring in expensive consultants or PE expertise

These are, to be clear, highly divergent use cases that necessitate different product philosophies, business models, and competencies from the companies building solutions. However, they share important characteristics, namely that they all represent opportunities to use AI and better data to make professional tacit knowledge, action, and context visible and measurable, unlocking precise intervention to help individuals (and by extension teams and companies) grow into their potential. 

July

Thesis

AI-Enabled Asset Ownership

When to sell to incumbents vs. when to compete.

November 2024

Industry Transformation

For companies deploying AI in legacy industries, a key question is whether to enable incumbents by selling them solutions or compete with them by taking a more full-stack approach. The trade-offs between these two models are something we have started to explore through our AI-enabled services analysis and this piece on when to compete with and when to sell to incumbents in an industry.

Recently, several firms have shared public theses on the opportunity for emerging AI companies (or vertical market software companies) to capture additional value in a given value chain by fully integrating via the acquisition of assets – as opposed to selling solutions to incumbents or taking a more organic (build instead of buy) approach to going full stack.

Slow, which helped finance the $1.6b acquisition of parking operator SP+ by software company Metropolis, calls the model "Growth Buyouts". Equal Ventures, which recently opined on the opportunity for such a model in insurance, calls it "tech-enabled consolidation". Vertical market software investor Tidemark calls the approach "tech-enabled vertical roll-ups". Re:Build Manufacturing calls its technology-driven manufacturing roll-up model an "American Keiretsu". 

Our current hypothesis is that while the AI-enabled acquisition of services businesses (with venture dollars) may not be wise, there is a significant opportunity for emerging AI, software, and robotics companies to capture more value and develop value chain control by acquiring legacy assets in physical industries. 

For decades, venture capitalists have been involved in what seem like Sisyphean tasks: digitizing businesses that operate assets in the real world. These efforts have struggled for many reasons, from software shortcomings to incentive problems and structural challenges in GTM to a lack of skilled labor on the customer side. We see a trend for novel ML models to solve the first of these problems by operating assets end-to-end without much human input. Yet the latter challenges remain. As a result, AI-native companies addressing these problems are prone to leave value on the table and, due to slow adoption, are likely slower at training and developing their models, forgoing a lot of additional value. AI-enabled asset ownership represents one path to capturing that value. 

Sequence matters for companies that go down this path. Companies should prove they can build technology and deliver ROI (for early customers or via a smaller-scale organic full-stack approach) before embarking on buying distribution via M&A. The only cases where early M&A can be attractive are those where smaller targets that are structurally very similar to large-scale targets in the market can be acquired for less than the cost of traditional GTM. Initially, these businesses have venture risk profiles, and only after the second or third large acquisition should they be derisked and become predictable/repeatable enough for investors with a lower cost of capital – Infra, PE, etc. – to consider participating. By that point, venture investors will have seen highly attractive returns. 

Initial Hypothesis on Key Conditions

Below is an initial hypothesis for when it makes sense for a company to vertically integrate via acquisition as opposed to doing so organically or remaining a software/technology vendor to a given industry:

  • The company must have a demonstrated "production advantage"; i.e., a clear technological or product edge that creates compounding value in an industry. Companies leveraging exclusively off-the-shelf technology likely lack the differentiation to deliver venture-scale outcomes even with strong operational execution and financial engineering. If a PE fund working with Accenture can go after an opportunity, or if human labor is cheaper on an efficiency-adjusted basis, it is unlikely to be a VC case. If solving the problems requires a combination of world-class technologists AND operators, this becomes an interesting opportunity for venture-style risks and outcomes. 

  • Customers have proven structurally unable to adopt and deploy a company's solution to its most productive extent. Alternatively, they seem unwilling to pay for the full value of it. This can be due to various reasons, from lack of scale to incentives leading to stasis in a given market ("if my competitor doesn't innovate, I don't have to"). We should be able to pinpoint a structural issue – and generally point to evidence from a company's own experience – with a given market/customer base to be certain the ineffectiveness is not a product issue. 

  • Building on the previous criteria, companies spearheading the buy-out strategy should be building technology that changes the core way an asset is operated, transforming the economics of the business/industry. Most likely that is where existing operators are (somewhat paradoxically) least incentivized to adopt technological disruption. This is what makes the Metropolis acquisition of SP+ such a compelling application of this approach. SP+ has 3,000+ parking locations around the world where the core customer experience (paying for parking) can be largely automated. While the "work around the work" (maintenance, security, etc.) still requires people, the ROI around the primary transaction is much easier to understand than situations where the AI solution is helping people deliver the primary solution more efficiently (e.g. home services models, legal services, etc.). 

  • Likely, there is a sweet spot around the level of complexity that goes into operating an asset that makes it a fit for AI-enabled acquisition. Complexity can stem from the core value proposition being complex, from several offerings being performed at the same asset leading to compounded complexity, or from the "work around the work" being significant (e.g., for regulatory reasons). Too little complexity at the core value proposition becomes a PE case; too much and the operational overhead reduces the leverage associated with improving the margins of the core asset. Ideally, the complexity/problems across holdings within the same space should be the same (e.g., parking lots), and skills easily transferable. We should be able to pinpoint these levels of complexity and identify assets/problems where they meet the sweet spot. 

  • The category a company is operating in needs to have acquisition targets that are operating at scale (ideally businesses worth USD 1B+ with additional value creation in the several hundred millions – further analysis on this needed). Buying assets operating at scale that can be fully optimized and automated via AI is substantially more attractive than rolling up locally-delivered services businesses. Again, this is what makes the SP+ acquisition so attractive: SP+ has 3,000+ parking locations around the world that are likely all run very similarly. Ideally, solutions deliver not only cost savings but also growth opportunities. We are also interested in companies with a view on how the integration of software and legacy assets will unlock increasing ecosystem control and turn the business into an industry operating system. 

  • Companies must have advantaged access to talent across functions. It is rare for a founder or founding team to understand “what great looks like” in areas where they have not had direct experience. A team of software engineers is likely unfamiliar with what makes a great industrial CFO or service-business COO. As a result, we may expect the pool of founders well-equipped to build such a business to be small. We have seen this play out at companies like KoBold Metals, which combine highly scientific founding teams with business acumen. 

These criteria still don’t fully answer why/when it is better to grow a full stack solution via acquisition rather than a more organic approach. One primary reason a company would choose to grow via acquisition is if the geographic footprint and surrounding “infrastructure” of an industry will look largely similar in the future as it does today. In such cases, the speed of distribution created by acquisition is enough of an advantage to overcome the accompanying cultural and organizational complexity that could be mitigated with a more organic strategy.

To use the Metropolis example, should we expect the footprint of the parking industry to be more or less the same in 10 years as it is today? While autonomous vehicles may make some impact on the margin during that time period, the inertia of the built environment probably means we should expect the flow of traffic and parking to remain relatively the same (airports, stadiums, commercial centers, etc.). 

A counter-example is H2 Green Steel, which has raised multiple billions of dollars to produce steel with 95% lower emissions than traditional steelmaking. Because the company's steel production depended on access to ample clean energy, it couldn't simply acquire and transform underperforming steel facilities despite the similarity in equipment, permitting, and human capital needs. Thus, to transform the industry around its vision, the company was forced to take a more organic approach. 

Companies also might pursue a buy instead of build strategy when the technology can be easily integrated with existing assets and infrastructure, substantially reducing time to value for a given solution. 

There are likely several other criteria in support of (and against) the strategy of vertically integrating via acquisition which need to be explored in further depth. 

July

Thesis

Energy Grid Data

I mean, data is in vogue now, and people are really kind of a bit obsessed with data and data companies.

November 2024

Industry Transformation

As we move to more renewable energy production and electrify our consumption, the problems we must solve to modernize the grid are becoming more complex. This need is further amplified by strong increases in demand associated with more data centers for AI. High-quality data is crucial infrastructure for understanding the electric grid and making the most impactful decisions when operating and investing in it. We believe there is substantial value in unlocking access to such quality data, from avoiding grid outages due to overload to increasing the ROI on maintenance and new investment decisions.

At the same time, there are substantial issues associated with accessing quality data on the U.S. power grid: 

Fragmentation
The grid is divided into regional entities, such as the Eastern, Western, and Texas Interconnections, managed by various utility companies, independent system operators (ISOs), and regional transmission organizations (RTOs). 

Lack of Standardization
This fragmentation leads to diverse data sources and inconsistent reporting practices, making it difficult to compile comprehensive, high-quality data.

Non-centralized energy sources
Additionally, the rise of distributed energy resources (DERs) like solar panels and electric vehicles adds complexity. Data on these resources is often fragmented and incomplete, complicating grid balancing and forecasting efforts.

Privacy and security
Concerns around this restrict access to detailed grid data, as releasing such information could expose vulnerabilities to potential threats.

While several initiatives (e.g., NREL, IEA) by various government agencies and NGOs to address the abovementioned challenges have been underway, none have brought the market to a point where easy, open data access has been achieved. 

Therefore, we see a unique opportunity in a dedicated effort to aggregate the various data sources and make them available in a standardized format via API and application. The availability of such data can be the underpinning for a wide array of new applications and use cases that require such data (e.g., applying Reinforcement Learning-based optimization to the grid) and can be substantially improved if such data exists. In short, we see an exciting opportunity for the company that can aggregate and maintain the highest quality grid data to be the nexus of an emerging ecosystem.
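To make the aggregation idea concrete, below is a minimal sketch of what a normalized, cross-source schema with per-source adapters could look like. The field names, the GridObservation record, and the example payload are hypothetical illustrations, not an existing standard or vendor API.

```python
# Hypothetical sketch of a normalized grid-data record behind a unified API.
# All names and the example payload are invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class GridObservation:
    """One observation, normalized across ISOs/RTOs and utility feeds."""
    source: str          # e.g. "ERCOT", "PJM", or a utility feed
    node_id: str         # standardized node/substation identifier
    timestamp: datetime  # always UTC
    load_mw: float
    der_generation_mw: float  # distributed resources behind the node, if known


def normalize_ercot(raw: dict) -> GridObservation:
    """Each fragmented source gets its own adapter into the common schema."""
    return GridObservation(
        source="ERCOT",
        node_id=raw["SettlementPoint"],
        timestamp=datetime.fromtimestamp(raw["epoch_s"], tz=timezone.utc),
        load_mw=float(raw["load"]),
        der_generation_mw=float(raw.get("der", 0.0)),  # often missing upstream
    )


# Invented example payload illustrating the adapter pattern.
obs = normalize_ercot({"SettlementPoint": "HB_NORTH", "epoch_s": 1700000000, "load": 412.5})
print(obs)
```

The value accrues in the adapters: once every ISO, RTO, and DER feed resolves into one record type, downstream applications (forecasting, RL-based optimization) can be built once rather than per region.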

July

Thesis

Nature Intelligence

We have been inspired by the field of digital bioacoustics ever since being introduced to this field through Karen Bakker’s work.

November 2024

Infrastructure Tech

We have been inspired by the field of digital bioacoustics ever since being introduced to it through Karen Bakker's work. We believe a few factors drive the emergence of this field. For one, sensors are becoming smaller and cheaper while edge processing and memory capabilities increase. The broadened availability of these sensors has led to an increase in domain-specific physical world data – a recurring theme in categories we get excited about – that can be coupled with complementary data sources. Combined with algorithmic breakthroughs, this data can be used in a host of interesting cases: 

  • Biodiversity monitoring: We believe that biodiversity is a crucial cornerstone of a climate-resilient ecosystem and world. Tracking biodiversity in a cost-effective and accurate way has a clear ROI for a host of different stakeholders. Bioacoustics augmented with different data sources seems to be an attractive way to achieve this. We see an opportunity to create an objective standard around this kind of data that can be critical to unlocking the emerging commercial ecosystem.

  • Optionality in collecting novel Nature Data: As we collect more data about our ecosystems, we will see emergent use cases for this data. 

    • We see a world where enough data on ecosystems is collected so that we can predict the trajectory of an ecosystem and take the measures/actions to maintain it. Potentially, this could enable the fast regeneration or creation of novel and healthy ecosystems from scratch.

    • Building more sophisticated bioacoustic models can allow us to develop a more granular understanding of the natural world (e.g., tracking the healthiness of individual plants or animals vs. entire ecosystems), which will drive novel use cases in agriculture and beyond.

    • We have been excited about human-to-animal communication for a while and have been following the work that organizations like the Earth Species Project are doing. While concrete use cases will likely only emerge as we develop these models and understand their capabilities and limitations, proven applications such as navigating bees and deterring elephants from entering farms already show promising signs of impact and ROI.

    • As followers of the Santa Fe Institute, we are convinced that interdisciplinarity in building complex systems is conducive to human advancement. Developing a deeper understanding of nature's complex ecosystems to inspire our man-made systems in novel ways holds significant upside. This is the core thesis behind our investment in Sakana AI.

    • We see the potential for bioacoustic data to resonate with consumers. For example, users could listen to and interact with ecosystems (e.g., their local forests).

We see an exciting opportunity in an orchestrated commercial effort to bring the research from recent years into the field and deepen our understanding of nature and the positive upside that comes with that.

July

Thesis

AI Movie Workflow Suite

AI video content creation will likely diverge into two paths.

November 2024

Industry Transformation

AI video content creation will likely diverge into two paths: high-quality productions that capture and create wider cultural moments, and lower-quality, personalized content. Consumers are expected to value both types, making tradeoffs between production quality and personalization based on their needs.

High-Quality AI-powered Content – We believe that world-class creative talent is attracted to tools and places that enable them to realize their creative ambitions. Given AI's economics and possibilities in the creative process, it will become an indispensable tool for the best creators. We appreciate that AI models today cannot, on a standalone basis, generate world-class content on par with Hollywood-grade productions. We believe that the foreseeable future will require holistic tools that enable outstanding creative talent to tell great stories with captivating visuals. Therefore, we see a unique opportunity to marry the capabilities of the most advanced AI models (across relevant layers) with an interoperable software and workflow suite. 

We believe there is substantial economic value and optionality associated with successfully building out such a suite:

  • An AI-powered suite can wedge its way into a software market that has seen little innovation. As talent makes the availability of such solutions a key factor in deciding whom to work with (e.g., studios), most major studios will likely have no choice but to adopt the solutions demanded. If played correctly, such an AI-enabled suite can replace existing tools and, over time, set new standards.  

  • We see opportunities to selectively go end-to-end and enable the build-out of a full-stack AI-enabled movie studio/production company. 

  • We see substantial opportunities to expand into other mediums (e.g., gaming).

Low-Quality AI-powered Content – On the other side of the spectrum is lower-quality, highly personalized, rapidly produced content that can be generated by small creators and, ultimately, by the user (either actively or passively based on preferences). This will not require dedicated workflows with large consumer aggregators (e.g., Netflix, Meta, YouTube) but will instead be captured by companies uniquely positioned to democratize easy access to video generation models, automated content aggregation, and distribution.

From a venture perspective, we are especially excited about the opportunity associated with the former but believe there will be large companies built in the latter where emerging companies can identify and engage high-value niches that fall outside the core focus of existing platforms (e.g. sports).

July

World View

Consumer AirSpace and Bubbles

There is a palpable sense that we are in for a major disruption of the way we currently spend our time and money.

October 2024

Fundamental Consumer

Working Thesis
There is a palpable sense that we are in for a major disruption of the way we currently spend our time and money. There are a few underlying trends (some of them might appear at odds with each other):

Consumers are increasingly living and consuming in two spaces that are drifting apart: 

Increasingly homogenous AirSpace
Globalisation and innovations in mass production and marketing gave rise to global consumer brands and the first wave of a globally flattened culture. The internet put this on steroids - the same memes, music, and clothes are available almost instantly everywhere. The experience economy, initially a backlash against this homogenisation, has been commoditised. Uber from the airport to your similarly designed Airbnb, whether in Miami, Mumbai or Marrakesh. Scale wins, and to achieve that scale you have to work with social media and search engine algorithms, which tend to surface the most mainstream goods and content (because it is the least risky and most profitable), thereby reinforcing that mainstream for consumers. The same is happening in film, where studios are increasingly focusing on mainstream features. We use the term AirSpace, coined by Kyle Chayka, for this phenomenon of increasing homogeneity.  

We expect the emergence of generative AI to further reinforce the unification of mainstream content. By definition, these algorithms probabilistically create the type of content they are expected to develop based on their training data. As the cost of creating generative content comes down, this will create massive amounts of predictable content that fits squarely into AirSpace and lacks the unexpected. 

Increasingly Heterogenous Personalized Bubble
At the other end of the spectrum, there is a strong trend towards individualised content consumption. Due to the abundance of on-demand content (e.g. Spotify, Netflix), there is a shift towards consuming content on demand and in a highly personalised way. While there are benefits to this type of content consumption, it also makes the content that each of us consumes predictable, as our individual consumption preferences are understood and reinforced by recommendation algorithms. 

As a result, our shared cultural fabric, which is an important medium through which we connect with each other, is being eroded. For example, in its final season in the late 90s, Seinfeld was consistently the number one show on television, averaging 22 million viewers per episode, who watched the episode simultaneously and discussed it in the office the next day. In 2023, the most watched show was Suits, which premiered in 2011 and had its final season in 2019 - we saw it come up in zero conversations in 2023.

We expect this to increase as AI-generated content becomes increasingly viable. We see a not-too-distant future where content across all media and potentially all levels of quality is created for an audience of N of 1, highly tailored to each individual's preferences. 


What we believe to be true about human psychology and sociology
People like trends and the comfort they bring. So AirSpace is not bad and will continue to exist. However, there is likely to be little room for innovation; large aggregators exist (e.g. Meta, Google, Airbnb) and will continue to monetise this in the best possible way.

Humans like to consume the content they enjoy, and that reinforces their bubble. The more personal, the better. Hence, the Personalized Bubble is not bad. We expect this to get much weirder from here as application developers and consumers lean into AI-powered use cases. Character AI was chasing this opportunity, but a team of former Google researchers was unlikely to embrace the weirdness. 

People like to consume authentic, unique things. However, much online content lacks authenticity/quality/care and is predictable. Gen AI is the straw that breaks the camel's back as the cost of content creation trends towards zero (or the cost of computing). 

As a result, there has been a noticeable shift in how large parts of our digital lives are moving either to group chats (which can act as a curation layer for the noise) or back to IRL in the case of dating (e.g. running clubs in NY or supermarkets in Spain). We also see this shift playing out beyond content and relationships. We believe that people have an innate desire to consume goods that others have put care into and that are unique. As this type of content becomes less present/prominent online (e.g., due to Gen AI), we expect to see a big shift towards people consuming physical goods and experiences that have this artisanal aspect, are unique or ephemeral, such as pottery, handmade clothing, leather goods, live concerts, etc. This is great for brands like Hermes, which have kept craft at the heart of their leather business. It's also great for live performing artists (and their ecosystem), local artisans, etc. 

Humans crave shared cultural experiences. Unexpected and rooted in whatever shared cultural fabric is left, these experiences must shatter the confirmatory AirSpace and transcend our personalized Bubbles. Achieving this in a repeatable fashion requires a deep understanding of the Zeitgeist and the ability to turn this inside out in unexpected ways that deeply resonate with a society's (or sub-groups) shared cultural fabric. 

Opportunity Areas
Substantial innovation will occur in the context of AI-enabled personalized experiences. We are excited about this opportunity and are looking for companies exploring envelope-pushing form factors and ideas that are borderline fringe today.

As the AirSpace and the Bubbles continue drifting apart – becoming more homogeneous on the one hand and more heterogeneous on the other – there will be substantial value in creating these types of experiences in a repeatable fashion. Studios like MSCHF and A24 have done a great job of this.

July

Thesis

Intelligence-Enabled Marketplace

We see an exciting opportunity for AI-enabled marketplaces to emerge.

October 2024

Infrastructure Tech

Working Thesis

We see an exciting opportunity for AI-enabled marketplaces to emerge. While there are a lot of opportunities for AI to enhance marketplaces (see good NfX write-up here), we are especially interested in situations where AI-enabled processes are in a reinforcing interplay with data advantages that lead to a sustained higher value proposition (i.e., better matching) in the marketplace (see graph below).

As outlined above, there are two interconnected feedback loops at play: 

  1. Using LLMs and VLMs to collect the right proprietary data at scale (i.e., conduct interviews, ingest large documents, understand client requirements using document upload, etc.).

  2. Use fine-tuned LLMs/VLMs + other ML models to understand demand and supply better, identify actions that reduce uncertainty around matching probability (e.g., follow-up questions), and carry out these actions in service of enabling more cost-effective/higher-value matching (see the sketch below).  
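As a minimal illustration of the second loop, the sketch below picks the follow-up question that most reduces uncertainty about match probability. The scores, questions, and answer probabilities are invented; a production system would derive them from fine-tuned models and historical match outcomes.

```python
# Minimal sketch of the second loop: choose the follow-up question that most
# reduces uncertainty about match scores. All numbers and names are invented.
from statistics import pvariance


def expected_variance_after(answer_splits: dict) -> float:
    """Expected posterior variance of match scores if a question is asked.

    answer_splits maps each possible answer to (probability of that answer,
    the match-score distribution we would believe afterwards).
    """
    return sum(p * pvariance(scores) for p, scores in answer_splits.values())


# Current belief about how well a candidate fits five open roles (0..1 scores).
prior_scores = [0.40, 0.45, 0.50, 0.55, 0.60]

# Candidate follow-up questions -> {answer: (probability, resulting scores)}.
questions = {
    "open to relocation?": {
        "yes": (0.5, [0.70, 0.72, 0.75, 0.78, 0.80]),
        "no": (0.5, [0.10, 0.12, 0.15, 0.18, 0.20]),
    },
    "preferred stack?": {
        "python": (0.6, [0.42, 0.46, 0.50, 0.54, 0.58]),
        "java": (0.4, [0.38, 0.44, 0.50, 0.56, 0.62]),
    },
}

print("prior variance:", pvariance(prior_scores))
best = min(questions, key=lambda q: expected_variance_after(questions[q]))
print("ask next:", best)  # relocation sharply separates outcomes, so ask it first
```

The point of the toy: a question whose answers sharply separate outcomes (relocation) is worth more than one whose answers barely move the scores (stack preference), and that prioritization is exactly what proprietary interaction data lets a marketplace learn.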

We expect businesses creating sustained value in this space to meet the following criteria:

  1. LLMs, VLMs, and other models can perform tasks to an acceptable degree (i.e., they meet a bare minimum threshold) – both on the proprietary data collection and matching side.

  2. Large amounts of unstructured data and feedback loops are useful for fine-tuning models that directly unlock economic value.

  3. Nobody has collected data relevant for training/finetuning these models at scale as there has been no economic/technological incentive to do so.   

  4. There are ways to create highly frictionless form factors using the models in 1) that allow users to interact with these platforms seamlessly and in highly personalized ways to collect large amounts of data. 

  5. Initial data and model advantages can be sustained and turned into lasting moats with little risk of second movers and other market participants (e.g., incumbents with large distribution) being able to catch up. 

We see opportunities in various areas, from HR to travel to healthcare provider (e.g., psychologist) matching – especially in scenarios where a lack of information leads to low matching rates. A few cases:

Case Study 1: Staffing

Staffing is historically incredibly time-consuming, requiring a deep understanding of a candidate's capabilities and an assessment of the job requirements. This is very hard to scale, as quality assessment usually requires 1) reviewing materials, 2) conducting interviews to dig deeper and then reviewing them, and 3) feedback cycles to understand what type of candidates the demand side actually wants (stated vs. revealed preferences). This leads to many staffing marketplaces doing a bad job of vetting demand or being very expensive, with matching rates reflecting this. 

Let’s go through the criteria set up above to see whether a hiring marketplace is a good fit to become intelligent:

  1. LLMs can already review and synthesize vast amounts of unstructured data (e.g., CVs, websites). They are capable of doing the same with job requirements. They are also capable of performing job interviews to a somewhat satisfactory level. 

  2. Models and AI interviews can be finetuned based on desirable outcomes (e.g., matching of demand and supply), thereby adjusting their reviewing and interview capabilities. This can even happen in a customized way, given that certain parties on the demand side are large enough to guarantee a certain "offtake." Mercor wrote this on their blog:

  3. This part is not so clear in the context of staffing. For one, there is a plethora of existing and new AI-enabled hiring tools that use AI-supported video (e.g., HireVue), and existing staffing platforms (e.g., Upwork) are rolling out video interviews, too. It is unclear to what extent these platforms might or might not have large amounts of unstructured data combined with hiring matches that they can use to train better models. Also, by sheer scale and distribution, these platforms should be able to generate plenty of data easily. 

  4. In the segments of the economy where jobs are sought after, people are eager for the opportunity to be in the talent pool that is considered for specific jobs. In these cases, people are willing to share their data and CVs and conduct AI interviews – especially if the process is smooth. Given that the demand side (aka the companies looking to hire from the talent pool) is reasonably attractive, the CAC associated with acquiring the supply and data (i.e., video interviews, CVs, etc.) should be fairly low. 

    As described above, while we don’t assume AI-based matchmaking is perfect yet, we believe AI can already support increasingly efficient matching, enabling a cash-flow-generating business model while data is collected and models improve.

  5. Given the dynamics described under 3, it is unclear whether an HR marketplace with an initial data advantage can sustain this advantage. What if existing platforms like Upwork roll out AI-based video interviews and start training their models? With their existing brand and supply, they should be able to generate more data than any startup substantially faster, leading to better models, etc. If not, what is a relevant quantity of data to establish a platform as the winner? Will general LLMs acquire the capabilities of finetuned models as they get better and context windows improve?
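
To make criterion (1) concrete, here is a minimal sketch of LLM-based candidate screening, assuming the OpenAI Python SDK and an API key in the environment; the model name and the JSON schema are illustrative choices, not a claim about how any particular marketplace works.

```python
# Minimal sketch: screen a CV against a job description with an LLM.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY set in the environment.
import json
from openai import OpenAI

client = OpenAI()

def screen_candidate(cv_text: str, job_description: str) -> dict:
    """Ask an LLM to score a candidate against a job and explain the score."""
    prompt = (
        "You are a staffing analyst. Given the CV and job description below, "
        "return JSON with keys: fit_score (0-100), strengths (list), "
        "gaps (list), follow_up_questions (list, for an AI interview).\n\n"
        f"CV:\n{cv_text}\n\nJob description:\n{job_description}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any capable model works
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# Each (screening output, eventual match outcome) pair becomes the training
# signal from criterion (2) that a marketplace could use to fine-tune its
# reviewing and interviewing models over time.
```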

July

Thesis

AI-enabled PCB Automation

It is a recurring meta-theme that we think AI will have a great impact on the physical world.

September 2024

Industry Transformation

Working Thesis

It is a recurring meta-theme that we think AI will have a great impact on the physical world. At the same time, we are convinced that companies that innovate around business models and take ownership of certain processes will unlock a lot of value, maximizing the value capture associated with their technology. 

One area that has caught our attention in this context is AI-enabled PCB layouting. Printed Circuit Boards (PCBs) are the backbone of modern electronics, enabling a wide range of devices across various industries. In consumer electronics, PCBs power smartphones and smart home devices, enhancing our daily lives. The medical field relies on PCBs for critical equipment like MRI scanners and pacemakers, improving patient care. Automotive applications include engine control units and advanced driver assistance systems, making vehicles safer and more efficient. In aerospace and defense, PCBs are crucial for avionics and satellite communication. Industrial settings benefit from PCBs in robotics and automation systems, while telecommunications infrastructure depends on them for routers and cell towers. From the devices in our pockets to the satellites orbiting Earth, PCBs play an indispensable role in connecting and powering our technological world. As the complexity of end devices increases, so does the complexity of PCBs. 

The increasing complexity of PCB layouts makes design more challenging due to higher component density and miniaturization, which require intricate placement strategies and precision routing. Managing multiple layers and implementing high-speed interfaces demand careful signal integrity analysis and tighter manufacturing tolerances. Integrating mixed technologies complicates the design process further, requiring effective partitioning and thermal management. These factors necessitate advanced skills and sophisticated tools to ensure that designs meet performance and manufacturability requirements. Already today, as shown in the table below (Source: Claude), the processes associated with correctly laying out a PCB take 50%+ of total PCB development time. We expect this share to grow as PCB complexity increases to keep pace with the novel applications we need them for.

It is our current assumption that increasing complexity will have a disproportionate impact on the effort and time it takes to create these layouts. Unlike schematics, layout seems to be a fairly straightforward task requiring little strategic context: a PCB layout either works or does not based on certain benchmarks, whereas schematics can be more ambiguous. We have seen significant progress in AI model development (especially reinforcement learning) that can automate and significantly accelerate parts of the PCB layout process.
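
To illustrate why layout is amenable to reinforcement learning, here is a toy sketch of an objective benchmark that could double as a reward function. The Layout fields and weights are hypothetical stand-ins for the kinds of checks real EDA toolchains run (design-rule checks, routed wirelength, via count).

```python
# Toy sketch of an objective PCB layout benchmark usable as an RL reward.
# Fields and weights are hypothetical illustrations, not real EDA metrics.
from dataclasses import dataclass

@dataclass
class Layout:
    total_wirelength_mm: float  # sum of routed trace lengths
    via_count: int              # layer transitions used
    drc_violations: int         # design-rule check failures
    unrouted_nets: int          # connections the router failed to complete

def layout_reward(layout: Layout) -> float:
    """Higher is better; hard failures dominate soft efficiency terms."""
    if layout.unrouted_nets > 0 or layout.drc_violations > 0:
        # A layout either works or it does not: hard constraints come first.
        return -1_000.0 * (layout.unrouted_nets + layout.drc_violations)
    # Among valid layouts, prefer shorter traces and fewer vias.
    return -(layout.total_wirelength_mm + 5.0 * layout.via_count)
```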

The total number of PCB designers in the United States is 72,971, with an average salary of around USD 74k per year. This gives a total salary base of USD 5.4B for PCB designers. Automating a significant part (70+%) of their jobs offers considerable cost savings. Of course, this does not include the economic benefits of AI models accelerating the process and substantially compressing timelines. This is especially valuable at the higher end (e.g., aerospace, defense), where PCBs are highly complex and take orders of magnitude more time to design. Accelerating items on the critical path into production is likely extremely valuable and hard to capture in cost-saving numbers.
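
As a quick sanity check on those numbers:

```python
# Back-of-the-envelope check of the savings estimate in the text.
designers = 72_971
avg_salary_usd = 74_000
total_salaries = designers * avg_salary_usd             # ≈ USD 5.4B per year
automatable_share = 0.70                                # the "70+%" from the text
potential_savings = automatable_share * total_salaries  # ≈ USD 3.8B per year
```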

We have spent significant time thinking about the opportunities in AI-enabled outsourcing and services businesses and believe that PCB layout provides the structural environment for such a model to emerge:

  1. Objective benchmark assessments 

  2. Clear benefits to assuming responsibility for the work output

We believe that a team capable of driving significant improvements here can build a large company, with a wedge into a market that software companies otherwise find hard to penetrate due to the dominance of Altium and others.

July

World View

European Defense

A new era of strategic autonomy and societal resilience

August 2024

Industry Transformation

With war on its doorstep and American attention shifting to the Pacific, Europe has been forced to prioritize resilience and strategic autonomy for the first time in a century. While Russia’s invasion of Ukraine exposed Europe’s lack of preparedness and home-grown capabilities, the conflict has shifted the perception of European builders, investors, and policymakers on the importance (and ethics) of developing and deploying critical technology to foster sovereignty.

The result has been a groundswell of momentum aimed at transforming Europe’s defense-industrial base: protecting European values by deterring Russian aggression in the near term and building the long-term foundations to project strength amid rising Great Power Conflict.

In recent years, change has occurred at all levels – from the EIB’s updated views on defense technology and the European Commission’s first-ever Defence Industrial Strategy to the rise of an actual defense technology ecosystem in Europe for the first time, catalyzed by the momentum of lighthouse companies like Helsing, ambitious efforts like the EUR 1B NATO Innovation Fund, and grassroots organizations like the European Defense Investor Network.

But expanding markets, increased capital flows, and narrative momentum don’t always imply attractive forward-looking returns. 

Despite the market’s growth, inertia, fragmentation, and protectionism rule the European defense market. While European defense spending has returned to Cold War levels, the continent still lacks urgency relative to geopolitical allies and rivals. The conflict in Ukraine has done little to unite European perspectives on the what, how, and who of building next-generation defense capabilities. The EU’s two largest economic and military powers – Germany and France – remain fundamentally split on the role of Europe in its own defense. This philosophical gap threatens to worsen the severe physical fragmentation of European defense forces – Europe operates 5x the number of vehicle platforms as the US. At the same time, the UK has increasingly shifted attention away from the continent towards the AUKUS coalition.

The US defense technology ecosystem, far more developed than Europe’s, inspires little confidence in what lies ahead. Through September of 2023, venture-backed companies were awarded less than 1% of the $411 billion in Defense Department contracts awarded in the government’s fiscal year – only a slightly larger share than in 2010, when few startups were building military technology. And while companies like Anduril have shown that the path to scale is possible, the company’s success may end up making it the new technology distribution chokepoint instead of a bellwether for a thriving new defense ecosystem.

These factors present significant obstacles to building and scaling European defense technology companies. They may also present unique opportunities for a highly targeted investment approach in the space, aimed at turning the market’s weaknesses (e.g. fragmentation) into strengths and riding key catalysts that may help emerging companies overcome the inertia and sub-optimal industry structure.

Catalysts Creating Opportunity

To believe in the opportunity to invest in emerging European defense technology companies in the face of the incumbent market structure, we need to see significant technological, economic, social, and policy-related shifts that are, critically, available to emerging entrants and not incumbents.

Europe has been forced to prioritize resilience and strategic autonomy for the first time in a century. This has restructured the continent's procurement regimes, capital markets, and attitudes. The importance of the simple availability of early-stage capital for emerging defense and security companies in Europe cannot be overstated. With dozens of early-stage funds now focused exclusively or significantly on the space and later-stage investors slowly showing up, financing a defense technology company in Europe to the point of scale is now possible. As the EIB and other capital markets institutions continue to evolve their views, we expect many of the capital markets barriers to financing defense and security companies across the company life cycle to begin to fall away.

Procurement remains a significant challenge, but the fragmentation of Europe creates opportunities for emerging companies to get to market faster by targeting smaller, potentially more agile countries more inclined to adopt new solutions. Greece, for example, now spends 4% of GDP to support a tech-forward defense strategy while countries close to the front in Ukraine have been forced to move quickly to adopt new solutions. 

The “primitives” for rapid, capital-efficient defense technology development have come online, making it possible for companies to ride multiple technological tailwinds to build solutions that meet the complex needs of government customers. Decreasing costs of hardware, enabled by advanced manufacturing, better (software) tooling, and the acceleration of physical-world foundation models make it possible for companies to develop complex defense technology systems at a fraction of the cost of incumbents. AI systems are already operating successfully in significant tests (like dogfighting with fighter pilots) and on the battlefield in Ukraine, which should drive more receptiveness from (risk-averse) buyers and users. 

Lighthouse companies and talent ecosystems are emerging around defense and national security-focused technology for the first time in Europe. The US defense technology ecosystem built momentum on the back of breakthrough companies like SpaceX and Palantir. The same pattern is playing out in Europe, with companies like Helsing and The Exploration Company forming the foundation for a thriving defense-industrial ecosystem in Munich. While less developed in terms of defense and space-focused technology, places like Stockholm (energy) and Paris (AI) have become beacons for talent in areas adjacent to national security. Meanwhile, London has captured much of the early-stage energy and likely represents a strong ecosystem to launch a defense technology company thanks to its physical proximity to Europe and cultural proximity to the US.

The Ukraine conflict has presented a unique opportunity for companies to develop proof points and revenue, creating a “backdoor” for future contracts with Western European governments. It has also highlighted the future of warfare. Rapid acquisition and deployment processes in Ukraine have helped companies generate real revenue and test systems in live situations. While larger Western European governments have been slower to respond, and more likely to simply drive business to existing primes, the proof points being developed by emerging companies should help their cases in (eventually) accessing larger, longer-term programs. Technologically, the predominance of electronic warfare has given a fundamental advantage to agile companies that can iterate rapidly to stay ahead of Russian competition. 

Key Insights

The following factors are the most significant in driving success for emerging European defense technology companies. These lessons are drawn from the companies in our review and from our customer/expert interviews.

New defense companies are software-first, R&D-centric, and mission-driven. Incumbent defense contractors operate on a cost-plus business model, essentially building to the specifications laid out by government buyers and layering a “reasonable” margin on top (5–15%). As a result, large primes spend less than 3% of their budget on R&D and lack the incentive to innovate. On the other hand, companies like Anduril and Shield AI take on product development risk themselves and spend massively on R&D.

And while the hardware these companies build tends to garner the most attention, the software and autonomy systems underlying the hardware make everything work. Anduril’s Lattice platform ties together all of the company’s hardware products, fusing sensor data and creating an autonomous operating picture. This software-defined operating model drives better margin structures (Anduril targets a 40% gross margin vs. under 20% for Lockheed and other primes), allowing them to continue fueling an R&D flywheel.

Fragmentation remains the most challenging aspect of European defense. It may also present the largest opportunity. Europe’s fragmentation challenge needs little additional explanation. There is not one unified military-industrial complex on the continent; there are 27. Each has a different view on supporting its own national champions, different relationships with EU member countries, and divergent views on buying from outside (usually the US). This has resulted in a complex web of disparate capabilities (weapons systems, vehicle platforms, and communication models) that limit rapid response and collaboration.

Understanding this, and realizing that it is likely beyond the reach of one company to solve from a hardware (let alone cultural) perspective, is key to uncovering where opportunities sit. Helsing, for example, has leveraged its positioning as a multi-domain AI backbone to build early leadership around this concept. As cheap drones, cameras, and other sensors proliferate, the opportunities to coordinate the complex data and operational picture, and to solve capability and collaboration gaps through more modularity and interoperability, become larger.

Technology differentiation is table stakes. The most successful companies will possess a “secret” to navigating procurement. Despite the shroud of complexity surrounding defense procurement, success remains largely “champion-driven”, as Anduril CRO Matt Steckman recently remarked. Companies don’t win through better technology; they win by solving specific problems for people with influence in the buying process. Companies must simultaneously engage in long-term relationship building (including lobbying) to build trust with procurement influencers while developing relevant proof points in the field. One way of doing this, as Anduril demonstrated and emerging European players like Lambda Automata are attempting to replicate, is by viewing defense and security as a “conglomeration of micro markets” – which includes adjacent opportunity areas like public safety and border control.

Narrative momentum is highly rated but likely remains underrated by European founders. The traditional stereotypes of European vs. American founders seem to have played out in the early part of this new defense tech wave – from Anduril’s narrative mastery to the momentum of ecosystems like El Segundo to the sophisticated way some companies and investors have integrated themselves into the Washington decision-making systems. As in all markets, there is a reflexive nature to success in defense – the companies that win figure out how to craft a better story and more social proof to attract capital and talent in advance of fundamental traction. 

Distribution bottlenecks inherent in government and defense contracting are already contributing to market consolidation for emerging defense technology companies. Competing against defense primes means eventually competing in every domain they operate in. As software-first companies break through, the returns to scale and breadth might become even greater – platforms like Anduril’s Lattice get stronger as they consume more data and control more hardware assets in the field. Combined with the defense market’s natural bent towards consolidation, companies that can be “first to distribution” in a given area will be very hard to displace and will be strongly positioned to roll up interesting technology and talent, as Anduril has already started to do aggressively. (The sheer number of Anduril references in this document reflects its outsize and rapidly compounding success in this space!)

Emerging Investment Areas

There are several valuable defense market maps and landscapes worth evaluating to understand different ways of breaking up the market; perhaps the most comprehensive is this one from Quiet Capital’s Michael Bloch: National Security and Defense Market.

To avoid rehashing those efforts, our focus has been on identifying emerging themes that span multiple segments of such maps, supported by converging market, technology, and geopolitical tailwinds. While not comprehensive, these themes align well with the catalysts and insights above and are where we have seen several of the most interesting companies in our review – the most compelling companies tend to touch multiple themes.

Modularity and Interoperability → Leaning into the fragmented nature of European defense through solutions that aim to unite disparate operating systems and coordinate complex environments. While software capabilities will be the core connective tissue, hardware plays a big role as well. Cheaper, smaller, interoperable systems built to be easily adopted (both budget and technology-wise) can help accelerate initial deployment and provide companies with a platform from which to expand. 

Rapid Response → Building a more dynamic defense-industrial base by shortening time and cost to intervention across domains and operating areas. This ranges from faster kinetic capabilities (e.g. hypersonics and electronic warfare) to rapid manufacturing capabilities (e.g. Replicator) to faster deployment of machines and people (e.g. counter UAS swarms, labor coordination platforms) to systems that can be deployed (and as importantly, replaced) quickly. 

Multimodal Physical World Data and Intelligence → Wayve’s recent autonomous driving demonstrations showcased the speed at which multi-modal models are making their way into the physical domain. Along with the rapid decline of hardware costs, models that can reason more dynamically create interesting opportunities in defense, where operating environments are extremely fluid (i.e. not repetitive like pick and place, etc.) and thus pose problems for more rigid AI systems. Better simulation data will also continue to play an important role in preparing autonomous systems for live action. This represents a more horizontal theme and is thus something we might explore more deeply beyond defense.

Software for Hardware → The declining cost of hardware also creates room for better tooling, both at a collaboration/workflow level (i.e. Atlassian and GitHub for hardware builders) and at a design level (i.e. better CAD/EDA, “Figma for chips”, etc.). Fusion, a software platform developed and externalized by space launch company Stoke, highlights the need for better tooling to serve the hardware revolution. Enhanced IP and data security along with high levels of required precision for certain use cases may create specific opportunities in defense.

Maritime Production, Security, and Infrastructure → Control over maritime infrastructure represents a significant geopolitical and economic advantage. Over the past decade, China has invested heavily in shipbuilding capacity. Today, a single shipyard in China has more production capacity than the entire US shipbuilding industry. However, the importance of maritime control goes beyond just shipbuilding. Undersea cables, for example, are the backbone of the global financial and communications systems – over 95% of the world's communications are carried by a network of roughly 500 cables laid across the oceans. These represent critical vulnerabilities that need to be proactively protected through better surveillance, kinetic deterrence, and cybersecurity technologies.

Combatting Digital Authoritarianism → Control of the digital economy is highly centralized, with cheaper data processing and engagement-centric business models (i.e. advertising) feeding the strength of a small number of powerful companies and institutions. This has led to democratic deterioration and a loss of trust in key institutions. It also creates a more straightforward surface area for attack and manipulation by adversaries – spanning consumer-focused influence campaigns to corporate IP theft. Technology that empowers sovereignty over assets and information, increases privacy, and enhances secure communication and collaboration represents a somewhat orthogonal, and bottom-up, approach to investing in defense and security, as the go-to-market model may not depend on large-scale government procurement.

July

Core Theme

Fundamental Industry Transformation

We pay close attention to categories where cutting-edge software (e.g., AI) is fundamentally changing and improving processes – often in the real world. This is a recurring theme across industries, and we invest in companies critical to bringing about fundamental industry transformations.

Partner

Jonathan Jasper

Jonathan is a General Partner at July Fund. Jonathan is based in New York City, USA.


Portfolio Founder

Hélène Huby

Hélène is CEO & Co-Founder at The Exploration Company. Hélène is based in Munich, Germany.

News

A.I. Needs Copper. It Just Helped to Find Millions of Tons of It

07/12

The deposit in Zambia could generate billions, provide minerals for the energy transition, and help the United States secure critical supply.


Portfolio Founder

David Ha

David Ha is CEO & Co-Founder at Sakana AI. David is based in Tokyo, Japan.

News

Sakana AI raises USD 30M

01/16

Sakana AI raises USD 30M to develop nature-inspired AI.

Portfolio Founder

Chrisman Frank

Chrisman is CEO & Co-Founder at Synthesis School. Chrisman is based in Los Angeles, USA.

Partner

Philipp Schindler

Philipp is Chief Business Officer at Google and a Founding Limited Partner at July Fund. Philipp is based in Mountain View, USA.

Portfolio Founder

Kurt House

Kurt House is CEO & Co-Founder at KoBold Metals. Kurt is based in San Francisco, USA.

News

The European Space Agency chooses The Exploration Company

05/22

The agency awards contracts to two companies to develop cargo services to the International Space Station.

News

Mining Company Is Silicon Valley’s Newest Unicorn

06/20

KoBold has raised USD 195M from existing investors to accelerate its AI-backed search for critical minerals vital for preventing the most catastrophic impacts of climate change.

Research

Brett Bivens

Brett is Head of Research at July Fund. Brett is based in Annecy, France.

Core Theme

Infrastructure Tech

We know that there are a few areas where fundamental and enduring infrastructure technology will be created and widely adopted in the coming years. We invest in companies that are building significant pieces of this infrastructure.

Thesis

Always On Agents and Information Networks

AI-enabled workflow and productivity optimization

May 2025

Infrastructure Tech

In modern knowledge work, progress often depends on critical paths. These are sequences of tasks that must be completed in order for work to move forward. In many industries where people work standard business hours, progress is frequently blocked simply because someone is not available. AI agents can step in to operate asynchronously, handling tasks, coordinating follow-ups, and moving projects forward outside of human working hours. This allows organizations to reduce latency and maintain momentum across time zones and schedules.

Beyond task automation, AI agents can also help people focus on what matters most. One of the biggest bottlenecks in large organizations is not execution, but prioritization. Agents can analyze activity across systems, identify what is urgent or strategic, and surface the highest-impact actions for individuals and teams. This ability to guide attention is just as important as the ability to take action. When agents help both complete work and clarify what should be done next, organizations become more responsive, more efficient, and better aligned.

As adoption grows, these agents begin to form a network of intelligence across systems and teams. The impact becomes even more powerful when this extends beyond a single company. In many industries, work involves coordination between multiple businesses. AI agents deployed across an industry can identify where workflows intersect and where mutual dependencies exist. While each company is focused on its own goals, there are often shared constraints or timing issues that affect everyone. Agents with visibility into these broader patterns can help coordinate across boundaries and find solutions that are optimal for the entire network. This level of cross-company coordination leads to smarter decisions, fewer delays, and more resilient operations.

We expect this transformation to play out across a range of industries, but it will begin where the conditions are most favorable. The first are industries where decisions need to move quickly across time zones and where critical paths are high-frequency and time-sensitive. The second are industries where each company interacts with a wide range of external stakeholders and depends on constant information exchange to function. Logistics is a clear example, but similar dynamics exist in sectors like finance, supply chain management, and global professional services. These are the environments where AI agents can immediately create value and where industry-wide coordination is both possible and valuable.

July

Thesis

Gen AI Design Platform

Creative flow.

April 2025

Infrastructure Tech

We are witnessing the emergence of a new generation of creative tools built natively around generative AI. These tools are not simply extensions of existing software but represent a complete rethinking of how creative work is done. They allow artists, designers, and other creators to interact with generative models in more dynamic, intuitive, and flexible ways, optimizing for controllability.

Legacy platforms like Adobe face structural limitations in adapting to this new reality. Their products are built around manual workflows and rigid UX patterns. This often creates friction when trying to incorporate generative capabilities. In contrast, new platforms are creating environments specifically designed to harness the power of AI from the ground up. These new platforms can move faster, experiment more freely, and respond directly to evolving user behaviors.

The Case for a Model-Agnostic Platform

We believe the long-term opportunity lies in building a platform that is model-agnostic. Different models will continue to perform better at different types of tasks. Some may be optimized for photorealism, others for stylization and animation. Rather than rely on any one model, the winning platform will allow users to access many and use them like creative tools, choosing the right one depending on the desired outcome.

Even if model performance begins to converge over time, there will still be a strong need for a unified creative environment. This environment should provide consistency, control, and flexibility across all media types and allow creatives to produce high-quality work without constantly switching tools or formats.
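
A minimal sketch of what such a model-agnostic layer could look like in practice. The task taxonomy and the adapter stubs below are hypothetical; a real platform would also weigh cost, latency, and per-user preferences when routing, and would normalize each model's API behind the adapters.

```python
# Minimal sketch of a model-agnostic routing layer for a creative platform.
# Task types and adapters are hypothetical illustrations.
from typing import Callable, Dict

def photoreal_model(prompt: str) -> bytes:
    return b""  # stand-in for a photorealism-optimized image model

def stylized_model(prompt: str) -> bytes:
    return b""  # stand-in for a stylization/illustration model

def video_model(prompt: str) -> bytes:
    return b""  # stand-in for an animation/video model

MODEL_REGISTRY: Dict[str, Callable[[str], bytes]] = {
    "photorealism": photoreal_model,
    "stylization": stylized_model,
    "animation": video_model,
}

def generate(task_type: str, prompt: str) -> bytes:
    """Dispatch one creative request to whichever model suits the task."""
    if task_type not in MODEL_REGISTRY:
        raise ValueError(f"no model registered for task type '{task_type}'")
    return MODEL_REGISTRY[task_type](prompt)
```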

Open Strategic Question: Who Aggregates the Users?

One of the most important open questions is whether the models themselves will be the first to aggregate large user bases. Platforms like Midjourney and Runway have shown that high-performing models can attract a lot of users early on. As these user bases grow, those same companies may then decide to build full creative platforms on top of their models, combining performance with workflow.

This presents a risk. If user aggregation happens at the model layer, new platform-first companies may find themselves competing not only with incumbents like Adobe but also with model providers that are quickly becoming vertically integrated.

However, we believe it is still early in the evolution of this space. Most creators are still in an exploratory phase. They are playing with tools, testing capabilities, and not yet fully committed to any one workflow or product. Adoption today is wide but shallow. There is still a real opportunity to build a platform that becomes the standard by offering both breadth of capability and depth of control.

Balancing Exploration and Execution

The platform that wins this space must meet two very different needs. First, it must support experimentation: creatives need the freedom to explore, test, remix, and iterate without friction, discovering the best use cases for gen AI as they evolve. Second, it must support execution: professionals who need to deliver finished work need tools that are reliable, precise, and efficient.

This is a difficult balance to strike. Lean too far into experimentation, and the platform may feel like a toy. Focus too much on structure and workflows, and it may feel restrictive or uninspired. The right platform will create a continuum where users can move fluidly between play and production, depending on their goals.

Strategic Opportunity

We believe the future belongs to platforms that:

  1. Rethink creative workflows around generative systems rather than adapting old formats

  2. Provide powerful yet intuitive control layers over model outputs

  3. Integrate a wide variety of models and modalities in a seamless way

  4. Remain flexible and adaptable as model capabilities continue to evolve

  5. Serve both casual exploration and professional execution without compromise

This is not a feature update to existing software. It is the foundation of a new creative stack. The companies that recognize this shift and build with these principles in mind have the chance to define the next decade of digital creativity.



July

Thesis

Government GTM Enablement

Government procurement represents one of the world's largest markets yet remains trapped in antiquated processes controlled by middlemen and consultants.

April 2025

Industry Transformation

Government procurement represents one of the world's largest markets – $665B annually in the US and €2T in the EU – yet remains trapped in antiquated processes controlled by middlemen and consultants. The status quo disproportionately rewards incumbency over innovation, with procurement cycles measured in years rather than months. Only organizations with dedicated government relations teams can navigate the labyrinth of compliance requirements, shutting out smaller innovators and ultimately delivering suboptimal solutions to taxpayers. This inefficiency isn't merely a business problem but a governance crisis: critical capabilities in defense, infrastructure, and public services are delayed or never implemented due to market access friction.

Recent crises have demonstrated that procurement can move at unprecedented speed when existential pressure exists. Operation Warp Speed compressed vaccine development and procurement from years to months. Similarly, the Ukraine conflict catalyzed rapid defense procurement innovations that bypassed traditional bottlenecks. These examples prove that the system can change – what's missing are the tools to make such acceleration the norm rather than the exception.

As we write in many of our memos that touch on the public sector, there is unprecedented urgency for procurement modernization if Western societies want to meet the current geopolitical moment. Both policymakers and agency leaders recognize that national security and competitiveness depend on tapping into broader innovation ecosystems beyond traditional contractors. 

This alignment of incentives creates a unique moment for transformation and for the companies that can drive it.

The convergence of two powerful forces – outside-in technological innovation and inside-out governmental reform – has created a perfect storm for disruption:

Outside-In: AI & Infrastructure Revolution → LLMs and specialized AI models can now interpret complex government requirements, generate compliant proposals, and navigate the bureaucratic maze with unprecedented accuracy (a minimal sketch of this requirement-extraction step follows below). These systems can reduce proposal creation from weeks to hours while increasing win rates through intelligence-driven targeting. This has dramatically reduced the cost of building procurement solutions capable of ingesting, analyzing, and acting upon the vast corpus of government contracting data – from forecasts to awards to performance metrics.

Inside-Out: Reform & Existential Urgency → Defense budgets are expanding dramatically amid growing global instability, with unprecedented allocations for emerging technologies. Simultaneously, government-led reindustrialization efforts and the imperative for resilient supply chains are creating entirely new procurement categories. Policymakers across Western democracies recognize that procurement modernization isn't just about efficiency – it's about ensuring critical innovations actually reach government users. The upside of the controversial DOGE program is not so much that it cuts costs but that it creates pathways for genuinely disruptive solutions to enter government and actually demonstrate their value. As agencies face mounting pressure to demonstrate innovation and supply chain security, they're increasingly receptive to new solutions for procurement enablement.
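
To make the outside-in point concrete, here is a minimal sketch of the requirement-extraction step, assuming any capable LLM sits behind a simple llm(prompt) call; the JSON schema and compliance-matrix fields are illustrative choices, not a product design.

```python
# Minimal sketch: turn free-text RFP language into a compliance matrix.
# `llm` is a stand-in for any hosted or local model call.
import json

def llm(prompt: str) -> str:
    """Stand-in for a real model call (wire up your provider of choice)."""
    raise NotImplementedError

def build_compliance_matrix(rfp_text: str) -> list[dict]:
    """Extract requirements and pair each with a trackable response stub."""
    raw = llm(
        "Extract every requirement from the RFP below as a JSON list of "
        '{"id": str, "text": str, "mandatory": bool}.\n\n' + rfp_text
    )
    requirements = json.loads(raw)
    return [
        {**req, "response_draft": "", "owner": None, "status": "open"}
        for req in requirements
    ]
```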

The most compelling opportunities in government GTM automation will come from companies that deliver comprehensive solutions rather than point products. We see three main archetypes emerging:

Proposal Intelligence Platforms (Horizontal and Vertical): Span the entire pre-award process with various entry points – some focus on opportunity discovery and pre-RFP intelligence, others on proposal generation, and still others on specific verticals like defense or healthcare. By automating the most labor-intensive aspects of government contracting while maintaining human oversight where needed, these platforms can deliver 10x efficiency improvements without sacrificing quality. The most sophisticated solutions map the full ecosystem of agencies, programs, budgets, and stakeholders, enabling proactive engagement rather than reactive bidding. The critical differentiation will come from proprietary datasets and domain-specific AI fine-tuning that general-purpose LLMs cannot match.

AI-Powered Compliance and Deployment Infrastructure: The regulations governing government contracting represent perhaps the most complex body of business rules in existence. Platforms that can abstract away this complexity through automated verification, document generation, and approval workflows enable even small businesses to maintain perfect compliance without dedicated staff. The winners will build API-first architectures that can serve as middleware between any vendor and any procurement system.

Procurement Marketplaces & Networks: Two-sided marketplaces that connect government buyers with pre-vetted vendors represent a third category with powerful network effects. These platforms standardize procurement workflows, reduce friction for both sides, and create new distribution channels for vendors previously locked out of government sales. By integrating AI-powered matching algorithms, they can dramatically improve the quality of vendor-opportunity fit while ensuring compliance with diversity and locality requirements. The most successful will operate as true networks, not just catalogs, facilitating partnerships among vendors to address complex government needs.

We think companies founded post-2022 may have structural advantages in this market: they're built natively for the LLM era rather than grafting AI onto legacy systems, they're unconstrained by technical debt from previous government contracting paradigms, and they attract talent with both AI expertise and government domain knowledge.

The business model innovation is equally compelling. Where traditional government sales consultants and middlemen charge high fixed fees regardless of outcomes, AI-enabled platforms can adopt performance-based pricing models that align incentives with their customers. By directly tying compensation to contract wins or a percentage of award value, these platforms can capture a fair share of the massive value they create while dramatically lowering upfront costs for vendors. This approach is particularly transformative for smaller companies previously priced out of government opportunities, expanding the competitive landscape.

While established players will certainly compete, the velocity of innovation, ease of deployment, and business model flexibility favor new entrants who understand the unique intersection of technology and government mechanics.

The US federal market offers the most attractive initial target given its scale, homogeneity, and budget predictability. However, the platforms with lasting power will be architected from day one to address the fragmentation of global procurement systems, positioning them to expand internationally as they mature. The companies that succeed will not merely digitize existing processes but fundamentally reimagine how public and private sectors collaborate, unlocking trillions in market value while enabling more responsive, effective governance.

July

Thesis

Full Stack Edge AI

Enabling the physical world action layer

April 2025

Infrastructure Tech

The ability to sense, understand, and act upon physical world data in real-time is becoming fundamental to competitive advantage across industries. While cloud computing enabled the software revolution, the next wave of innovation requires computing to move closer to where data originates – whether in factories, vehicles, remote infrastructure, or medical devices.

Traditional cloud-centric architectures face fundamental limitations: latency that makes real-time decisions impossible, bandwidth constraints that make processing increasingly rich sensor data costly, and privacy concerns that restrict what data can leave local environments. As the speed of machine reasoning and machine-to-machine communication increases (and as more intelligent/autonomous machines come online), this problem will only become more acute.

Edge AI – deploying full-stack intelligent computing in complex, contested, or remote environments – directly addresses these limitations. By processing data where it's generated, Edge AI enables real-time insights even with limited connectivity. Early adopters in manufacturing, logistics, defense, and energy are beginning to demonstrate that Edge AI can be far more than just IoT – which has largely failed to live up to expectations. We see several catalysts creating an inflection point for Edge AI adoption and impact.

  • Hardware Innovation: AI-specific chips and accelerators have made high-performance computing viable in small, power-efficient form factors. What once required a data center can now run on a device the size of a credit card.

  • Data Gravity: The explosion of sensors is generating unprecedented amounts of physical world data. Moving this data to centralized clouds is becoming unsustainable.

  • Regulatory Pressure: Privacy and data sovereignty requirements increasingly mandate local processing, particularly for data like healthcare records or surveillance footage.

  • Reindustrialization Momentum: Manufacturing and supply chain resilience initiatives are driving investment in intelligent, autonomous systems that require edge processing.

We see several compelling opportunities for emerging companies: 

  • Full-Stack Edge Infrastructure: Building the "AWS of the Edge" for remote and harsh environments—standardized, deployable compute and connectivity solutions that bring cloud-like capabilities anywhere.

  • Edge Foundation Models: Small models capable of enabling, as one company puts it, a “ChatGPT like experience without requiring internet” (see the sketch after this list).

  • Edge Orchestration Platforms: Software that enables enterprises to manage and scale thousands of distributed edge nodes, solving the operational complexity that stalls adoption.

  • Edge Security & Privacy: Tools and platforms that ensure edge deployments are secure and compliant, particularly critical for regulated industries and sensitive applications.
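
As referenced in the Edge Foundation Models bullet, here is a minimal sketch of the on-device idea, assuming the Hugging Face transformers library; the small checkpoint named here is an illustrative choice. After the initial weight download, inference runs without internet access.

```python
# Minimal sketch: a small language model running fully on-device.
# Assumes `pip install transformers torch`; the checkpoint is illustrative.
from transformers import pipeline

# Loads weights locally; after the first download, no internet is required.
generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # illustrative ~0.5B-parameter model
)

result = generator(
    "Summarize the last 24 hours of sensor alerts for the site operator.",
    max_new_tokens=128,
)
print(result[0]["generated_text"])
```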

We expect the most successful companies in this category to eventually span multiple of the capability areas highlighted above.

However, our initial hypothesis on where the most value will accrue pushes us to look in a slightly different direction. We are biased (and informed) by investments in companies like KoBold and Gecko (as well as businesses like Helsing), which have led us to believe that companies capable of building end-to-end, vertical-specific action layers – i.e., integrating edge AI into specific industry workflows (e.g., mining operations, hospital systems, defense operations) – will be the biggest beneficiaries of this momentum.

Trust, domain expertise, and distribution, along with business model optionality (i.e., the possibility of going full stack as KoBold has done), should help these types of companies escape commoditization.

Physical world data is a key pillar of our investment approach as it relates to both infrastructure and fundamental industry transformation – Edge AI represents a fundamental shift in how physical world data is harnessed, enabling the creation of truly intelligent systems that can operate autonomously and adaptively in real-world environments. The companies that successfully bridge the gap between sensing and action – becoming the systems of intelligence for their industries – will capture enormous value in this transition.

July

Thesis

Supply Chain "Virtual" Integration

Building the command centers for weaponized supply chains in an uncertain world

March 2025

"Your supply chain is your weapon". This statement is increasingly true for companies battling trade uncertainty and governments navigating rising geopolitical tensions. Yet most critical supply chains are hampered – and placed at risk – by siloed data systems, manual coordination, and a fundamental disconnect between visibility and action.

Large companies might understand their direct suppliers, but have limited visibility beyond that – leaving them scrambling when tariffs suddenly change the economic equation for key components or suppliers deep in their supply base are sanctioned without warning. Without proactive intelligence and execution capabilities, organizations are perpetually in reactive mode, lacking the “responsive tooling” needed to anticipate disruptions or rapidly reorient end-to-end operations when they occur.

Governments face similar challenges, often lacking insight into the capabilities of their industrial base. For example, the average German defense RFP receives just two bids, and 40% of all German public bids end up with only one bidder, largely because governments can't proactively identify suitable suppliers. This not only limits competition and innovation but also creates significant vulnerabilities around national security, energy availability, and healthcare access when rapid mobilization is needed.

Traditional vertical integration offered a solution but often proved impractical in a highly competitive, globalized economy. This, in turn, led to the “death of the industrial conglomerate” toward the end of the last decade as it became clear such operating models lacked the right feedback loops with the market.

Today, we believe a new paradigm is emerging: the “virtual integration” of supply chains, where AI, paired with distributed sensing, production, and logistics capabilities, coordinates independent partners as seamlessly as if they were one unified operation.

Two major forces are creating a unique “why now” moment for new companies to emerge: 

  • Geopolitical and Regulatory Imperatives: Trade wars, export controls, and the push for supply chain sovereignty are forcing companies to completely reconfigure their value networks. At the same time, new regulations around forced labor, carbon emissions, and transparency are making deep visibility and control mandatory. In the public sector, governments (particularly in Europe) now recognize that industrial capability is fundamental to national security, with the lack of tools to map capabilities and mobilize suppliers creating critical vulnerabilities. These concurrent shifts demand execution platforms that can dynamically adapt networks, ensure compliance, and enhance national resilience.

  • AI and Edge Intelligence: LLMs can process massive amounts of unstructured data – from customs declarations to technical specifications to satellite imagery – making previously dark, inaccessible information actionable. Meanwhile, better asset tracking via sensor networks and edge computing is enabling companies to build "virtual redundancy" instead of physical redundancy. Platforms use AI to create self-correcting systems that continuously adjust supply and demand, enabling companies to operate leaner while maintaining resilience. By dynamically shifting resources based on real-time conditions, these systems effectively create redundancy through intelligence rather than excess inventory (a toy sketch follows this list).
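
A toy sketch of that "redundancy through intelligence" idea: rather than padding every site with excess inventory, a system proposes transfers from surplus sites to sites running below forecast. The sites, numbers, and greedy rule are hypothetical illustrations.

```python
# Toy sketch: propose stock transfers instead of holding excess inventory.
def rebalance(stock: dict[str, int], forecast: dict[str, int]) -> list[tuple]:
    """Propose transfers from surplus sites to sites below forecast need."""
    surplus = {s: stock[s] - forecast[s] for s in stock if stock[s] > forecast[s]}
    deficit = {s: forecast[s] - stock[s] for s in stock if stock[s] < forecast[s]}
    transfers = []
    for short_site, needed in sorted(deficit.items(), key=lambda kv: -kv[1]):
        for rich_site in list(surplus):
            move = min(needed, surplus[rich_site])
            if move > 0:
                transfers.append((rich_site, short_site, move))
                surplus[rich_site] -= move
                needed -= move
            if needed == 0:
                break
    return transfers

# Example: one site running hot, one with slack.
print(rebalance({"berlin": 40, "lyon": 5}, {"berlin": 10, "lyon": 25}))
# -> [('berlin', 'lyon', 20)]
```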

As a result of these catalysts, we have seen the market evolve in two phases:

Phase I → Insight-Layer Companies – Horizontally-focused companies aimed primarily at visibility and planning by providing information and analytics to aid decisions. They aggregate data and highlight issues like compliance violations, supply disruptions, or market shifts, but generally stop short of executing decisions. While valuable, customers increasingly find these horizontal insights insufficient, and integrating one-size-fits-all platforms with highly specific use cases can be cumbersome. What companies need isn't just awareness of problems but solutions to them.

Phase II → Action-Layer Platforms – Vertically-focused operating systems that span digital and physical capabilities and that autonomously plan, adjust, and execute supply chain tasks in real-time. These platforms integrate into procurement, production, and logistics processes, becoming the end-to-end execution system for specific domains:

  • Electronic Components: Platforms that connect engineering design through procurement and logistics, managing supplier selection, automating purchase orders, and ensuring compliance.

  • Specialty Chemicals: Virtual contract manufacturing solutions that connect customers with production capacity, handling everything from formulation to production and distribution.

  • Construction Materials: Platforms that optimize the fragmented and highly regional construction materials supply chain to improve material sourcing efficiency and enable traceability for sustainability certifications and carbon accounting.

  • Defense Industrial Base: Solutions that map capabilities, connect prime contractors with innovative suppliers, and provide governments with greater visibility into their industrial ecosystems while remaining resilient to emerging threats to key infrastructure and supply lines (e.g. electronic warfare).

  • Healthcare: Platforms that actively manage inventory across distributed locations, predict demand surges, and automatically rebalance supplies during crises.

We believe execution-focused platforms offer a particularly compelling investment case for several reasons:

  1. Higher Stickiness: Systems that run daily operations become mission-critical infrastructure with strong retention and pricing power.

  2. Direct Value Capture: By controlling actual transactions or interactions, these platforms can adopt value-based pricing tied directly to volume or outcomes.

  3. Strong Operational Moats: Execution platforms accumulate proprietary data and build valuable supplier networks that become increasingly difficult to replicate.

  4. Market Demand for Action: Corporate and government buyers increasingly need solutions that not only identify problems but solve them.

Supply chain virtual integration represents an opportunity to fundamentally transform how physical goods move through the world. The most successful companies will be those that move beyond visibility to actual execution, building defensibility through proprietary data, network effects, and deep workflow integration.

In a world where supply chains are weaponized, these virtual integration platforms are emerging as the command and control systems for the global economy – helping organizations navigate unprecedented complexity with greater agility, transparency, and resilience.

July

Thesis

AI-Enabled Regulatory Compliance

Turning document-heavy, manual processes into an automated, continuous function.

March 2025

Over the past decade, regulatory compliance has transformed from a back-office function into a strategic imperative. Shifting trade agreements, supply chains, and geopolitical alliances create real-time regulatory risks, and responding to them needs to happen at the speed of software and AI. With global regulatory changes exceeding 200 per day and enforcement action intensifying, companies need more efficient approaches.

AI is emerging as the key enabler of this transformation, turning document-heavy, manual processes into an automated, continuous function. As the regulatory landscape expands beyond traditional domains into AI governance, ESG reporting, and data privacy, LLMs and advanced NLP can now parse complex regulatory text, automate monitoring of everything from communications to physical assets (effectively building a regulatory-focused digital twin of a global organization), and generate compliance content and actions at scale.
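
A minimal sketch of that monitoring loop. In a real system an LLM would read the update in depth; here a crude token-overlap score stands in for the matching step so the sketch runs with no dependencies, and the policy texts are hypothetical.

```python
# Minimal sketch: flag which internal policies a regulatory update may touch.
# Policy texts are hypothetical; overlap scoring stands in for an LLM read.
POLICIES = {
    "P-12": "Customer PII must be deleted after 24 months of inactivity.",
    "P-31": "Suppliers are sanctions-screened before the first purchase order.",
}

def affected_policies(update_text: str, threshold: float = 0.05) -> list[str]:
    """Shortlist policies whose wording overlaps the regulatory update."""
    update_tokens = set(update_text.lower().split())
    hits = []
    for policy_id, text in POLICIES.items():
        policy_tokens = set(text.lower().split())
        overlap = len(update_tokens & policy_tokens) / len(update_tokens | policy_tokens)
        if overlap >= threshold:
            hits.append(policy_id)
    return hits  # candidates an LLM (or analyst) would then review in depth

print(affected_policies(
    "New rules shorten retention of customer PII to 12 months."
))  # -> ['P-12']
```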

Companies that adopt such tools gain a dual advantage: protecting downside risk while using regulatory shifts to identify and exploit market opportunities faster than competitors. This transforms compliance from a defensive function into a strategic capability for navigating complex global markets. 

The most promising companies building AI for regulation-related use cases make this dual benefit a reality and share the insight that while solving compliance-related challenges is valuable, the long-term opportunity rests on the idea that regulatory compliance increasingly touches every aspect of an organization. By integrating deeply with core business systems and workflows, AI compliance tools create natural expansion opportunities extending beyond their initial wedge.

For example, a company that starts by automating regulatory change notifications can expand into policy creation, control testing, and risk management – moving from insight to the automated action layer for regulatory compliance and organizational governance. As these platforms ingest more data, they create network effects through anonymized benchmarking and best practices. In some cases, companies will play a role in actually shaping regulation, surfacing data and insights to lawmakers, and using AI to encode changes to make compliance more efficient. 

We observe distinct regional dynamics in this market, creating the opportunity for regional champions to emerge (much like in fintech). European companies, for example, benefit from the EU's aggressive regulatory agenda, with deep knowledge of EU frameworks creating defensible regional positions.

Our assumption is that the nature of the customer profile places a premium on clear, rapid ROI for specific compliance activities (e.g., 50% reduction in audit time or 90% faster regulatory analysis). This necessitates deep domain expertise and an integration-first architecture. We also see this as a category that will rapidly move toward consolidation. A small number of regional players will quickly gain momentum, while followers will be acquired by incumbents.

We see this small subset of winners emerging as essential infrastructure for modern business – solutions that transform compliance from a cost center into a strategic advantage deeply embedded in how organizations operate.

July

Thesis

Commodity Intelligence

Opportunities at the intersection of AI, supply chain disruption, and shifting trade alliances

February 2025

Infrastructure Tech

The rules of global trade are being rewritten (some would say fundamentally erased) in real-time. Escalating geopolitical tensions, shifting alliance structures, and weaponized trade policy are creating a new regime of persistent instability in commodity markets. Our working assumption is that this volatility isn't just a temporary disruption – it will persist.

As a result, companies across the economy have been forced to reassess their relationships with commodity markets. 

Consumer goods manufacturers need better ways to manage agricultural and packaging inputs. Industrial companies are seeking more control over metals and energy exposure – from steel mills hedging ore to electronics manufacturers securing rare earths to automakers managing access to battery metals. Transportation companies must navigate volatile fuel markets. Energy producers want to capture more value from their physical assets. 

What was once a back-office function built on just-in-time procurement and undifferentiated supplier relationships has become a critical source of competitive advantage (or weakness).

As a result, companies that once outsourced market participation to traditional trading houses are now more actively developing in-house capabilities to manage their commodity exposure. 

This strategic reorientation parallels what we're seeing across the physical economy: as the real world becomes more measurable and markets more dynamic (i.e. our “Shortification of Everything” thesis), the gap between leaders and laggards will widen dramatically. Companies that build sophisticated capabilities to understand and act on physical world data will create powerful feedback loops – better data driving better decisions driving better data. Those that don't will face an increasingly insurmountable disadvantage as AI and automation accelerate the pace of market adaptation.

The emerging stack of technology solutions enabling this transition mirrors our Benchmarking the Physical World thesis. Just as standardized data and metrics enabled the development of modern financial markets, new platforms for understanding and pricing physical world risk are becoming critical infrastructure for commodity market participants.

Within this landscape, we can roughly divide the companies into a few areas, recognizing that there is significant overlap (and will be more as breakout companies expand) and that each industry has its inherent differences.

  • Trade Intelligence Platforms: Companies building proprietary data collection and analytics infrastructure to decode physical commodity flows. Winners start by aggregating hard-to-replicate datasets before layering on predictive analytics and trading signals. The sequencing opportunity lies in building deep, defensible information advantages in specific verticals to unlock expansion across commodity classes and deeper into the value stack.

  • Digital Marketplaces & Trade Execution Platforms: Platforms that modernize how physical commodities are traded, bringing efficiency, liquidity, and transparency to markets that traditionally rely on brokers, OTC transactions, or relationship-driven trading. These companies start by digitizing specific workflows (such as matching buyers and sellers, trade documentation, and contract negotiation) before expanding into value-added services like financing, logistics, and risk management.

  • Trading Capability Infrastructure (“Embedded Trading”): Software and AI-powered platforms that enable companies to build sophisticated trading operations from scratch. As more companies seek to capture trading margins around their physical assets, winners will combine deep vertical expertise with flexible technology that helps clients identify, execute, and optimize trading strategies while managing risk across their operations.

While financial markets have been transformed by algorithmic trading and real-time data, physical trading still operates largely through manual processes and personal relationships. The winners in this category will combine deep domain expertise with sophisticated technology for processing multi-modal physical world data. Those that succeed will become the foundational infrastructure for pricing and allocating physical resources in an increasingly complex global economy.

July

Thesis

Data-Driven Fashion

OrtegAI

February 2025

Digital Transformation

The fashion industry operates on a fundamental tension between creative vision and market reality, with traditional design-to-retail cycles spanning months and requiring significant upfront capital commitments before knowing if designs will resonate with consumers. Despite the rise of "fast fashion" players like Zara and H&M who compressed this timeline to weeks, the industry still faces persistent challenges of overproduction, waste, and missed market opportunities.

We believe an AI-driven approach represents a paradigm shift in how fashion products move from concept to customer. Unlike other design fields that require complex engineering validation, fashion design outputs can be rapidly translated from compelling visuals to manufacturable products, creating a unique opportunity for end-to-end AI transformation.

This transformation will unfold across three interconnected domains:

  1. Constraint-Aware Generative Design: AI systems that help designers go from sketch/idea to full-fledged design visuals faster while accounting for the supply chain and cost constraints inherent in those designs.

  2. Pre-Production Analytics and Validation: Using AI-generated renders and photography as market sensing tools before committing to production. This includes digital showrooming, social media testing, and other data-gathering approaches to validate demand before manufacturing begins.

  3. Supply Chain Integration and Rapid Production: Direct connection between validated designs and manufacturing partners, with automated translation of designs into production-ready specifications for quick turnaround.

The convergence of these capabilities creates the foundation for a new class of fashion business that can dramatically reduce time-to-market, minimize inventory risk, and better align production with actual demand.

Note: This approach builds on innovations pioneered by Ortega (Zara) in traditional fast fashion, but supercharges them with AI to achieve unprecedented speed and precision in market response.

Technology Enablers

Several technology breakthroughs make this vision increasingly feasible:

  • Advances in generative AI for fashion design now produce photorealistic renderings that can accurately visualize garments in various styles, materials, and on different body types.

  • Machine learning systems can now incorporate manufacturing constraints directly into the generative process, ensuring that designs respect material characteristics, production capabilities, and cost targets.

  • Digital twin technology enables virtual sampling and fitting without physical prototyping, compressing development timelines.

  • Supply chain digitization and API-first manufacturing partners create the potential for seamless handoffs from design to production.

Market Evolution

We see three approaches emerging to leverage these technologies:

  1. AI Design Accelerators: Tools that significantly speed up the design process while respecting manufacturability constraints. Early leaders will focus on specific categories where design-to-production translation is most straightforward. 

  2. Supply Chain Orchestrators: Platforms that manage the production side, from real-time information on production availability and prices to automated production and logistics.

  3. Full-Stack Platforms: Solutions that include everything needed either to launch their own brands or to enable others. We believe these could increasingly empower new types of brands (e.g., creators) with minimal technical design backgrounds and potentially even enable users to create n-of-1 pieces.

Key Success Factors

We believe successful companies in this space will demonstrate:

  1. Constraint Engineering Excellence – The ability to translate manufacturing realities into parameters that guide AI design generation is a core differentiator.

  2. Supply Chain Integration – Deep relationships with manufacturers who can rapidly produce small runs of validated designs will be essential to realizing the full value proposition.

  3. Brand Position – While the technology enables rapid iteration, successful companies will still need a coherent brand identity to guide design direction and customer expectations.

The most compelling opportunity may be in platform companies that can build the infrastructure connecting AI design capabilities, consumer testing channels, and manufacturing partners. These platforms could either operate their own fashion brands or provide technology to existing brands and retailers, potentially creating network effects as they accumulate design, consumer preference, and manufacturing data. As with the evolution of fast fashion in previous decades, we expect the first breakout company in this space to combine technological innovation with strong operational execution and distinctive brand positioning.


July

Thesis

“Plumbers” of the Reindustrial Revolution

Like traditional plumbers, these companies are focused on high-stakes problems where failure carries outsized consequences.

January 2025

Industry Transformation

While neo-Primes and OEMs capture headlines and venture capital flows, specialized players solving critical service, infrastructure, and component-level problems will be fundamental to transforming the physical economy. 

We call these businesses the "Plumbers" of the Reindustrial Revolution because, like their namesakes, they occupy an unglamorous but essential (and hard to dislodge) position in their value chains. These companies are modernizing playbooks pioneered by industrial giants: Westinghouse in critical components, Bureau Veritas in trust and data, Schlumberger in technical services, and Grainger in supply chain orchestration. 

Like traditional plumbers, these companies are focused on high-stakes problems where failure carries outsized consequences. Their businesses are built first on technical mastery and reliable execution, which fosters deep customer trust and loyalty. Competition remains limited not just because of technical complexity, but through the “niche” nature of their markets – rational actors won't deploy massive capital to displace established players in constrained categories like they might in unbounded markets. This creates a foundation for expansion into adjacent opportunity areas – deepening existing customer relationships or extending technical capabilities to expand TAM over time. 

A key theme across much of our research is how geopolitical competition is redrawing supply lines and catalyzing efforts to rebuild industrial capacity in Western markets. The existential threat motivating this has been a potential conflict with China. But even in a positive scenario where kinetic conflict is avoided – and even as “expected unexpected” events like DeepSeek’s R1 impact the Capex and energy equations – we (and others) believe the trend towards spending on reindustrialization will continue. 

Thus far, the narrative surrounding the Reindustrialization tailwind has primarily benefited companies at the "front end" – next-gen OEMs, new Primes, and companies building brands easily understood by generalist investors that control most of the growth capital in the ecosystem. This is reflected in valuations – the early champions have access to near-unlimited pools of cheap growth capital while earlier-stage players are priced at levels that assume near-perfect execution. While we share the market-level excitement about the new levels of scale the best companies in this market can achieve, we have been more circumspect in our approach to this category.

As competition continues to rise on the front end of the market, our hypothesis is that the most attractive risk-return opportunities will increasingly be found with the "plumbers," which we see emerging across four primary categories:

Critical Components

Then → Westinghouse's air brake system, invented in 1869 as railway networks reached continental scale, transformed railroad safety and became an industry standard, which created the foundation for one of the largest industrial conglomerates of the 20th century.

Now → The new material, form factor, and communication requirements of modern aerospace and defense systems create opportunities for specialized component makers to become standards in critical subsystems, from wire harnesses to thermal management to energy storage. 

Trust & Data Engines

Then → Bureau Veritas built a global franchise by becoming the trusted verifier of maritime safety standards as international trade expanded rapidly in the 19th century.

Now → The confluence of aging existing infrastructure and the need for new development creates opportunity at the intersection of novel inspection technology and data analytics to become the system of record and intelligence for asset health, compliance, and built world capital allocation. 

Superdevelopers

Then → Schlumberger became indispensable by mastering the technical complexity of oil exploration and production when the petroleum industry was rapidly expanding into new geographies.

Now → The energy transition as well as the emergence of “new Prime frontiers” (e.g. the Arctic and space) creates opportunities for companies that can i) develop proprietary technology suited for challenging environments, ii) develop project execution capabilities to integrate other solutions, and iii) master the regulatory complexity of operating in new areas. 

Supply Chain Orchestration

Then → Grainger was founded in the 1920s to provide customers with consistent access to motors as both the consumer and industrial markets for automotive and other powered machinery exploded.

Now → Electrification and UAV growth are driving demand for components like batteries, which are largely controlled by China and at increasing risk of tariffs and blockades. This creates new opportunities to build marketplace infrastructure for "democratic supply chains" and better supply chain coordination.

Across these different pathways, we think successful companies will share several characteristics:

  1. Natural capital efficiency and organic growth – Sharper focus avoids growth-at-all-costs capital strategies and expansion plans, fostering a more sustainable model for sequencing market opportunities.

  2. Rational competitive landscape – Perceived (initial) market sizes typically don't justify massive capital deployment by new entrants or existing players, while technical expertise and regulatory requirements create genuine barriers and, in some cases, help companies aggregate a portfolio of “sub-scale monopolies”.

  3. Value accrues to expertise (i.e. Process Power) – Deep knowledge of specific systems, regulations, or technical requirements becomes more valuable as complexity increases and companies either work across a broader segment of the overall value chain or integrate deeper into customer operations. 


1. The EDA market is one of the best examples of this. Companies like Cadence and Synopsys are both worth ~$80b and relatively insulated from competition because their TAM (as a % of the overall semiconductor market) and their cost (as a % of the overall semiconductor design and fabrication process) are small. From NZS Capital:

“As they're successful, they're able to layer on these new businesses that are really additive to the overall business. So they may not even be increasing in price, in a lot of cases, just selling more functionality, because chip designers need it. And it's a really important point to underscore that we're talking about this 550 billion TAM of semiconductors, and the TAM of devices on top of that is another step function. It's being enabled by this sort of 10 billion EDA TAM. It's really small, when you think about what they're delivering.”

“But the idea that more EDA could come in-house over time, it just seems really unlikely to me, in part, because it's just not a huge pain point for the customer. It's 2% of their sales, and they just get so much value for what they're giving, versus the effort to re-engineer all this stuff that's been created over the last few decades.”

2. Much like last decade where being the Uber or Airbnb for X was an unlock for high-priced early financing, the same is true today of companies promising to become the Anduril or Palantir for X.

3. This relates to our thinking on AI-enabled asset ownership/buyout opportunities.

July

Thesis

Transforming Clinical Trials

How can we massively speed up the timeline – and reduce the cost – of bringing new drugs to market?

January 2025

While the interplay of AI and better data is (finally) beginning to deliver on the potential of dramatically expanding the therapeutic opportunity space, these breakthroughs risk being stranded or significantly delayed without a transformation of the clinical trial process.

We believe several factors have converged to create an exciting ‘why now’ for companies building new clinical trial infrastructure. 

  1. The post-COVID regulatory environment and evolved operating procedures have created a unique window for reimagining clinical trials. 

  2. Remote monitoring, decentralized trials, and real-world evidence have moved from fringe concepts to validated approaches.

  3. The explosion in AI-discovered therapeutic candidates is creating pressure to modernize trial infrastructure for both human health and economic reasons – it is estimated that the cost of clinical trial delays can be on the order of millions of dollars per day.

Our initial hypothesis is that winning companies will possess the following characteristics.  

  1. Vertically integrated, building parallel infrastructure instead of patching the existing system. The complexity and interconnectedness of clinical trials mean that point solutions will struggle to drive meaningful change. For n-of-1 companies to exist in this space, they need control over the full stack – from patient recruitment through data collection and analysis. This approach is about more than technological self-determination. It also positions companies to innovate on the financial model of clinical trials towards better alignment among all of the key stakeholders (i.e. risk/upside sharing).

  2. AI (and post-COVID) native, designing their processes around modern capabilities rather than retrofitting them onto legacy approaches. This means leveraging AI for everything from protocol design to real-time monitoring while embracing decentralized/hybrid trials and remote data collection as first principles rather than accommodations.

  3. Built to capture the growth of AI-driven drug discovery (i.e. new companies) rather than competing for share in the traditional clinical trial market. This allows them to sidestep entrenched competitors to work with customers operating with the same true north of speed and technical advancement.

July

Thesis

Off-Road Autonomy

Reversing this physical world stagnation represents one of the largest economic opportunities of the coming decades.

January 2025

Infrastructure Tech

The Western infrastructure crisis is about more than aging bridges and roads (and elevators) – it's about our capacity to build, maintain, and modernize the physical systems that underpin productivity, economic growth, and strategic sovereignty. From critical mineral extraction for the energy transition to military logistics modernization to the massive manufacturing capacity needed to achieve reshoring objectives, we face unprecedented demands on systems that have seen little innovation in decades.

Reversing this physical world stagnation represents one of the largest economic opportunities of the coming decades. This is reflected in our work from several angles – most notably our investments in KoBold and Gecko, and through category research into energy infrastructure, sustainable construction, and defense.

It is easy to blame this stagnation on a lack of investment or an absence of vision among industrial (and bureaucratic) operators. But these are symptoms of the fact that physical world modernization – both digitization and automation – is not a monolith, and a vast majority of the work that needs to be done is a fundamentally harder problem than commonly understood.

The environments where we have most significantly slowed down and thus where we most need automation – sectors like construction and defense as well as settings like logistics yards – are characterized by high situational diversity: dynamic conditions, variable tasks, and diverse equipment fleets that often stay in service for decades. While continuous process industries like chemicals and manufacturing have made significant strides in automation, these high-diversity environments have remained stubbornly resistant to transformation.

Automating heavy industrial vehicles – earthmovers, mining equipment, military Humvees – represents an important step to mastering these environments and fundamentally transforming the productivity equation in these industries. While much of the discussion around physical world automation has centered on robotics or on-road consumer autonomy (Waymo, Tesla, etc.), these vehicles sit at the intersection by unlocking both autonomous mobility and task execution/manipulation capabilities. They are the workhorses of our industrial system, will continue to be for a long time, and are just now starting to become equipped for autonomous operation. 

"Today you have a few thousand [autonomous] vehicles in mining, you have a few hundred vehicles in ag, you have dozens of vehicles in other verticals. I think we're really at the starting line now. Ag, for example, is nearly 3 million tractors. Obviously only a small percentage of those are big enough or productive enough to be automated. In construction equipment there's a million plus units. You look at something like mining, there's something like 60,000 dump trucks. So those are your upper bounds. But today the biggest successes are in mining where you've got north of a thousand units deployed, which, when you compare to on-road, is in a similar realm." – Sam Abidi, Apex Advisors

Technology Tipping Points → Our robotics research leads us to believe that the category is approaching (or reaching) technological tipping points on several fronts. While on-road autonomy has focused on well-marked roads and predictable conditions, industrial autonomy faces fundamentally different challenges. These environments demand systems that can handle unstructured terrain, weather variations, and complex interactions between vehicles, machines, and humans.

Several technological advances are converging to finally make this possible: 

  • Vision-language-action models (VLAMs) and advanced perception systems that can understand both geometric and semantic elements of complex environments

  • Mapless localization capabilities that enable adaptation to rapidly changing conditions without relying on pre-existing maps

  • Improved sensor fusion that can differentiate between traversable elements (like foliage) and true obstacles while understanding surface characteristics

  • Edge computing architectures designed specifically for ruggedized, industrial deployment

  • Robotic hardware improvements (e.g. dexterity) that can be incorporated into autonomous systems to unlock end-to-end operational capacity

Talent and Capital Momentum → Along with the technological building blocks for this category, the talent seeds were planted over the last decade as capital and big visions fueled the first wave of autonomous vehicle company building. Frustrated by autonomy regulation and other bottlenecks, founders and engineers started to look for opportunity areas where product roadmaps – and commercial models – could be realized in 2-3 years rather than a decade. This led many to off-road autonomy – despite the much smaller TAM – and has driven a flurry of company formation and funding in the space. 

Investability – From Action Layer to Systems of Collaborative Intelligence → Building on our thesis in vertical robotics, we see retrofit industrial vehicle autonomy as a powerful near-term lever for modernizing infrastructure. The economics are compelling: retrofit solutions can deliver substantial cost savings versus new autonomous vehicle purchases while allowing customers to preserve their existing fleet investments, which often have 15-20+ year lifespans.
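To make the retrofit math concrete, here is a minimal back-of-the-envelope sketch. Every cost figure in it is a hypothetical assumption for illustration – the thesis does not specify retrofit or vehicle costs, only that fleets often have 15-20+ year lifespans:

```python
# Illustrative retrofit-vs-replace comparison for industrial vehicle autonomy.
# All cost figures are hypothetical assumptions; only the 15-20+ year fleet
# lifespan comes from the text above.

new_autonomous_vehicle_cost = 600_000  # hypothetical: price of a new autonomy-equipped unit
retrofit_kit_cost = 150_000            # hypothetical: retrofit kit plus integration
remaining_fleet_life_years = 15        # lower bound of the 15-20+ year lifespans cited above

capex_preserved = new_autonomous_vehicle_cost - retrofit_kit_cost
print(f"Capex preserved per vehicle: ${capex_preserved:,}")
print(f"Annualized over remaining life: ${capex_preserved / remaining_fleet_life_years:,.0f}/yr")
```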

We see a clear sequence for how companies build defensible positions in this category:

1. Action layer as a go-to-market wedge:

  • Target 80-90% automation of common tasks while preserving human oversight

  • Lead with collaborative service model combining autonomy systems with expert support

  • Focus on high-ROI use cases where service model can support end-to-end execution

2. Systems of Record:

  • Proprietary datasets around vehicle performance, environmental conditions, and task completion

  • Fleet management and analytics capabilities that span multiple vehicle types/brands

  • Data-driven maintenance and operations optimization

3. Systems of Collaborative Intelligence:

  • Coordination and resource planning across operators, vehicles, and robotic systems

  • Serve as integration layer for next-generation capabilities, whether built internally or via partners

  • Consider deeper integration (going beyond retrofitting) to increase system-level advantages


This follows the progression we expect to see Gecko take in the data-driven (and increasingly automated) inspection market and is being proven out now by off-road autonomy companies like Outrider, which has expanded from electric yard trucks using a patented robotic arm to a full suite of site infrastructure and logistics operations management systems. It is worth noting that we believe this same sequencing may not hold when selling to militaries who tend to be more concerned about vendor lock-in and thus less receptive to “operating system” style offerings. 

Still, we believe companies operating purely at the "action layer" will have limited long-term defensibility and will need to uplevel their capabilities over time. The path forward also likely includes hybrid models – as evidenced by Caterpillar and Teleo's approach of using remote operation as a bridge to full autonomy, allowing skilled operators to work from anywhere while systematically identifying repetitive tasks suitable for automation.

This progression allows companies to build trust through immediate value delivery while laying the foundation for deeper workflow transformation. The key is maintaining the flexibility to evolve alongside customer needs and technological capabilities rather than forcing premature standardization.

We are particularly interested in companies targeting:

  • Heavy industrial operations (construction, mining, and agriculture, depending on use case) where environmental variability is high but equipment standardization is low.

  • Military and defense logistics, which require operation across diverse terrain with mixed vehicle fleets.

  • Port and industrial yard operations, where dynamic routing and complex interactions between machines and humans are the norm.

This thesis faces two primary risks. First, a breakthrough in robotics foundation models could make the retrofit/incremental approach less compelling, though our discussions with leading robotics companies suggest they are not underwriting dramatic commercial-level breakthroughs on even a ~5-year horizon. Second, growing concerns about AI's impact on employment could spark regulatory pushback, though acute labor shortages in these industries create powerful countervailing forces.

Overall, we believe the combination of sensing, decision-making, and physical execution in high-diversity environments represents an attractive wedge to become industrial operating systems in several categories.

July

Thesis

Personal Security

The traditional concept of security, once firmly rooted in the domain of the state, is undergoing a significant transformation.

January 2025

Fundamental Consumer

The traditional concept of security, once firmly rooted in the domain of the state, is undergoing a significant transformation. Individuals are increasingly taking responsibility for their own safety and well-being, driven by a confluence of factors, including rising crime rates, the proliferation of cyber threats, and a growing awareness of the limitations of state-provided security in the digital domain. This shift is particularly evident in the digital realm, where sophisticated AI-powered scams and the growing abundance (and value) of personal data online – shared both knowingly and unknowingly – have created a new era of individual responsibility. We believe that as individuals become more proactive in managing their own security, the personal security market is poised for significant growth, offering a wide range of opportunities for companies that can provide innovative and effective solutions.

This shift manifests in the proliferation of data breaches and spam calls, which have become a major concern for individuals and businesses alike. In 2023, approximately 56 million Americans lost money to phone scams, with total losses reaching an estimated $25.4 billion annually. These scams often involve impersonating loved ones or authority figures, leveraging highly personal information to solicit urgent financial assistance or sensitive information.

This is exacerbated by the fact that scams and misinformation campaigns will only become more sophisticated from here on as they leverage AI-powered voice cloning and deepfake technology. This sets off what we often refer to as an evolutionary arms race between the deceiver and the detector. In this environment of heightened risk and uncertainty, individuals are taking a more proactive approach to their security. 

Moreover, as societies become more polarized, personal information becomes more easily accessible, and doxing becomes more prevalent, we see this sense of perceived risk spilling over into the physical world as well. 

We believe the opportunity can take many forms. From cutting-edge digital identity protection to counter-deepfake tools to physical home security platforms, personal security companies are leveraging technology to empower individuals and provide a sense of control over their safety and well-being.

July

Thesis

The Robotics Smiling Curve

Embodied AI reallocates value from hardware to intelligent foundation models and specialized vertical solutions, fueling leaps in productivity across complex tasks.

January 2025

Infrastructure Tech

Where will value flow as embodied AI takes off?

We are convinced that AI, deployed in robotics systems with the unconstrained ability to navigate and interact in the physical world, will be one of the biggest unlocks of productivity and abundance in our lifetime. The convergence of tumbling hardware costs, breakthroughs in AI, and mounting pressure for automation across critical industries has created an unprecedented opportunity for transformation in how physical tasks are performed.

What started 50+ years ago with the optimization of rote industrial tasks has evolved through distinct phases: first, the automation of controlled, repetitive workflows like warehouse pick-and-place operations, and now, the potential to handle end-to-end responsibilities in complex, multi-dimensional environments—from factory floors to healthcare facilities to homes.

This evolution comes at a critical juncture. Labor shortages in key industries, aging populations, and shifting supply chains in response to climate change and geopolitical pressures have created an urgent imperative for modernization. In industrial settings, where ROI drives decision-making, robotics solutions are already catalyzing larger automation budgets. In consumer settings, where emotional factors play a larger role, mounting evidence (e.g. Waymo adoption) suggests growing readiness for automation in everyday tasks.

As with any market opportunity, we are interested in understanding which technological and commercial capabilities are most scarce (and thus most valuable) and along with that, which parts of the value chain emerging companies are best positioned to win. 

Technological Tailwinds

The massive talent and capital flows into robotics over the past few years have been catalyzed by an unprecedented convergence of technological breakthroughs. This convergence is moving robotics from a hardware-centric paradigm (led by companies like ABB and FANUC) to one where intelligence and deep workflow integration capabilities drive market power.

At the core of this shift is the emergence of multi-modal foundation models that sit at the intersection of language understanding, vision perception, and spatial awareness. As DeepMind's Ted Xiao observed in his survey of 2023's breakthroughs, we're witnessing not just technological advancement but a philosophical transformation: "a fervent belief in the power of scaling up, of large diverse data sources, of the importance of generalization, of positive transfer and emergent capabilities."

This shift is backed by technological progress across several dimensions:

  1. Transformer architectures have opened entirely new possibilities for how robots process and act upon information from the physical world. Projects like Google's RT-X and RT-2 and TRI's work on General Navigation Models demonstrate the potential for end-to-end, general-purpose automation of dynamic physical interactions. These advances are particularly powerful in their ability to turn abstract concepts ("verbs") into context-specific actions – understanding, for instance, the crucial differences between opening a door and opening a phone.

  2. The hardware equation is rapidly shifting in favor of commoditization and widespread deployment. The emergence of cheaper, modular components across perception (cameras, radar, lidar), control (motors, actuators), and power systems is making the economics of cognitive robotics increasingly viable. Companies like Unitree are demonstrating how quickly hardware capabilities can advance when paired with improving intelligence layers. Perhaps more importantly, as these intelligence layers improve, robots can achieve more with simpler hardware configurations – a virtuous cycle that further improves deployment economics.

  3. Advances in computing infrastructure, both in cloud environments for heavy workloads and at the edge for real-world autonomy, have expanded the frontier of possible applications. This is complemented by breakthroughs in simulation, synthetic data generation, and cross-embodiment learning that promise to help robotics overcome its historical data scarcity challenges.

However, these tailwinds – and the ability for companies to defend technological advantages – are not evenly distributed across the value chain. For this reason, we believe the Smiling Curve is a useful framework for understanding where and how value will accrue in embodied AI.

In short, we see the most value flowing to i) foundation/world models that can generalize across tasks and embodiments and ii) specialized applications that can leverage these capabilities to solve high-value problems in complex domains. The traditional middle of the value chain – hardware manufacturing and systems integration – faces increasing pressure as intelligence becomes more important than mechanical sophistication. Similarly, data generation, labeling, and processing will also face downward pressure as big tech companies with ample access to data seek to drive commoditization to benefit other parts of their business (in robotics and beyond).

This creates two paths through which we believe emerging companies have the biggest advantage in sustainably creating value.

Robotics Foundation Models

Robotics foundation models have the potential to be the operating systems and action layer for the physical environment, transforming commodity hardware into real-world agents.

For RFM companies, we see “data gravity” as a key to success – the ability to create self-reinforcing loops where model improvements drive adoption, which in turn generates more valuable training data. Unlike language models, which could draw on the vast corpus of human-generated text on the internet, robotics models face a fundamental data scarcity challenge. Outside of self-driving vehicles, no one has accumulated the volume of real-world interaction data needed to train truly general models.

This scarcity creates a unique strategic opportunity. A company that can solve the data acquisition challenges through strategic partnerships and deployment models will build powerful network effects. As their models improve, they become more valuable to hardware partners and application developers, generating more deployment opportunities and thus more data – a virtuous cycle that becomes increasingly difficult to replicate.

Vertical Robotics: Deep Integration and Domain Expertise

At the other end of the curve, we see compelling opportunities for companies that can deeply embed robotics capabilities into important workflows in critical industries. These companies succeed not through general-purpose intelligence, but through their ability to solve complex, high-value problems. 

We believe vertical robotics approaches are most valuable where:

  • The workflows governing interactions between robotics and operational systems are highly complex

  • Social dynamics and regulatory requirements favor trusted brands with deep domain expertise

  • The cost of failure is high, creating strong incentives to work with specialists

  • Domain-specific data creates compounding advantages that are difficult for generalists to replicate

Companies like Gecko Robotics (July portfolio company) in industrial inspection exemplify this approach. Their competitive advantage stems not from robotics capabilities alone, but from the domain-specific meaning they extract from collected data. This creates a different kind of data moat – one built around understanding the nuances and edge cases of specific applications rather than general-purpose interaction. It also creates a wedge to expand deeper into a customer’s operations, both via increasingly intelligent workflow tools and more advanced robotics solutions. In addition to inspection, categories like defense & security and construction represent prime areas for vertical solutions to create value. 

Vertical robotics opportunities also force us to consider whether emerging companies or incumbents are best placed to succeed. Despite the massive amounts of capital invested in recent periods in logistics and warehouse robotics, outcompeting Amazon, which has famously externalized many of its cost centers into massive businesses to the detriment of venture-backed competitors, is a tall order. Likewise, consumer distribution and brand advantages held by companies like Amazon and Meta place most new companies at a significant disadvantage.

The Interplay Between RFMs and Vertical Solutions

We also believe there is significant potential for interaction between companies at the two ends of the curve; e.g. Gecko integrating a model from Physical Intelligence. Vertical solution providers can become valuable data partners for foundation model platforms, providing real-world interaction data from high-value use cases. Foundation model platforms, in turn, can help vertical solutions expand their capabilities without massive R&D investment in core robotics intelligence.

July

Thesis

Frontline Audio and Video

Next-generation platforms that combine AI-powered language understanding with advanced audio-video capture are set to revolutionize frontline work by transforming raw field data into trusted, industry-wide operating systems.

December 2024

Industry Transformation

Only a few years ago, while touring the new maintenance facility of a publicly traded aerospace company, an executive pointed out several innovations: automated tool checkout, more advanced safety equipment, and a (physical) file room located closer to the operating floor than ever before. That this last feature was included is telling. The feedback loops between frontline action and data input are central to the operations of many industries – manufacturing, policing, and trade services of all varieties (from plumbing to solar installation). Key elements like pricing estimates, project timing, and resource requirements are often functions of what workers are observing in the field or on the factory floor. 

Despite comprising the majority of the global workforce, frontline workers have been largely left behind by technological transformation. The inefficiencies are stark: law enforcement officers spend up to four hours per shift on documentation, with 96% reporting these demands keep them from core duties. In 2021, nearly ¾ of frontline workers were still using paper forms. But workers are ready to adopt new solutions. In manufacturing, 93% of workers believe software tools help them perform better and 96% would be willing to accept increased data monitoring in exchange for benefits like improved training and career development.

The convergence of several forces is creating an unprecedented opportunity to reshape frontline work and fundamentally change how operational knowledge is captured and leveraged. Advances in language understanding mean systems can now adapt to how workers naturally communicate, uncovering deeper context without forcing rigid input structures. Improved video processing and computer vision add meaning to streaming footage, while ubiquitous mobile devices and sensors enable both active and passive capture (which also contributes to a safer – hands-free, eyes-up – working environment). The maturation of retrieval-augmented generation (RAG) technology makes it possible to connect this unstructured frontline data with existing knowledge bases – from maintenance manuals to captured tribal knowledge – creating powerful feedback loops between observation and action.
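As a rough illustration of the RAG pattern described above, the sketch below shows how a spoken field note might be matched against a knowledge base and turned into a suggested action. The transcribe, embed, and generate callables are hypothetical stand-ins for whatever speech-to-text, embedding, and LLM services a real system would use; nothing here reflects a specific vendor's API:

```python
# Minimal sketch of a retrieval-augmented generation (RAG) loop for frontline
# audio: a transcribed field note is embedded, matched against a knowledge
# base (e.g., maintenance manuals), and the top passages are fed to an LLM.
# transcribe(), embed(), and generate() are hypothetical stand-ins.

from typing import Callable

def rag_answer(
    audio_clip: bytes,
    knowledge_base: list[tuple[str, list[float]]],  # (passage, embedding) pairs
    transcribe: Callable[[bytes], str],
    embed: Callable[[str], list[float]],
    generate: Callable[[str], str],
    top_k: int = 3,
) -> str:
    note = transcribe(audio_clip)  # worker's spoken observation -> text
    query_vec = embed(note)

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return dot / norm if norm else 0.0

    # Retrieve the passages most similar to the field note.
    ranked = sorted(knowledge_base, key=lambda p: cosine(query_vec, p[1]), reverse=True)
    context = "\n".join(passage for passage, _ in ranked[:top_k])

    prompt = f"Field note:\n{note}\n\nRelevant documentation:\n{context}\n\nSuggested next action:"
    return generate(prompt)
```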

The winners in this space will build trust by solving acute pain points in documentation and training, then expand to become essential operating infrastructure for their target industries. We see distinct opportunities across market segments. For SMBs – independent trades (“America’s new millionaire class”), farms, medical practices – these solutions can function from day one as a sort of COO and assistant, both improving operations and increasing enterprise value by making tribal knowledge transferable in eventual exits. For larger companies with field forces, manufacturing operations, or driver fleets, these tools accelerate training, surface best practices, and build operational continuity.

In both cases, we believe frontline audio and video capture will serve as the data wedge to become the system of record and intelligence for entire operations. Winners will need vertical focus – the needs of a solar installer differ meaningfully from those of a manufacturer or farmer. Trust and deep industry understanding are critical, as these companies will increasingly look to serve as the automated action layer for their customers, with business models that reflect the value they create (i.e. outcome-based pricing). The platforms that successfully capture and leverage frontline insights won't just become systems of record for individual companies – they'll emerge as the operating systems for entire industries, fundamentally reshaping how skilled frontline work gets done.

July

Thesis

Outcome-Based Pricing

A dominant narrative around how economic models will shift in response to AI is that companies can now "sell the work".

December 2024

Infrastructure Tech

A dominant narrative around how economic models will shift in response to AI is that companies can now “sell the work” – replacing seat-based pricing or subscription fees with models more directly tied to the value they create. This trend mirrors the evolution of digital advertising, where sophisticated attribution and optimization layers emerged to maximize and capture value.

Early evidence of this transformation is showing up in software verticals with well-defined (and highly measurable) workflows. In October, Intercom reported that 17% of recent software purchases included outcome-based pricing for their AI capabilities, up from 10% in the previous six months. One customer using Intercom’s “Fin” chatbot, RB2B, said the system autonomously resolved 60% of customer support tickets in August, saving 142 hours of human work. At $0.99 per resolution versus $10 for human handling, this represents both dramatic cost savings and a new pricing paradigm tied directly to outcomes.
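A quick back-of-the-envelope calculation, using only the figures reported above, shows why this pricing model is compelling (the 1,000-ticket month is a hypothetical volume for illustration):

```python
# Unit economics of outcome-based AI support pricing, using the RB2B/Fin
# figures reported above. The monthly ticket volume is a hypothetical input.

human_cost_per_ticket = 10.00      # reported cost of human handling (USD)
fin_cost_per_resolution = 0.99     # Fin's price per autonomous resolution (USD)
autonomous_resolution_rate = 0.60  # share of tickets Fin resolved autonomously

savings_per_resolved_ticket = human_cost_per_ticket - fin_cost_per_resolution  # $9.01

def monthly_savings(total_tickets: int) -> float:
    """Savings when Fin resolves its share of a month's tickets."""
    return autonomous_resolution_rate * total_tickets * savings_per_resolved_ticket

print(f"Savings per resolved ticket: ${savings_per_resolved_ticket:.2f}")
print(f"Savings on a hypothetical 1,000-ticket month: ${monthly_savings(1000):,.2f}")
```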

As AI capabilities accelerate, we expect a rapid build-out of supporting infrastructure focused on enabling and capturing this value creation and cementing this new economic paradigm. The demand side is already primed – companies face increasing pressure to deploy AI in high-ROI use cases, knowing their competitors (or new AI-native entrants) will if they don't. 

This dynamic is driving the emergence of several distinct outcome-based business models:

  1. Full-stack players aiming to fundamentally reshape the economics of critical industries (particularly those resistant to new technology adoption) represent the purest path to AI-driven outcome-based pricing. Companies like KoBold in mining aren't simply delivering a point solution to an existing value chain – they are using AI to transform how value is created and captured across the entire workflow. In doing so, they take on the full risk/reward that comes with attempting to reorient the economic structure of a deeply entrenched system. Similar opportunities exist in healthcare, where AI-driven approaches could dramatically reduce cost structures while improving patient outcomes, and in commercial real estate, where end-to-end platforms can reshape everything from building operations to tenant experience to energy management.

  2. End-to-end workflow solutions in well-defined/quantitative areas like sales (Salesforce) or customer service (Intercom, Zendesk). Here, we believe emerging AI-native players face a significant uphill battle. Incumbents that cover multiple steps of a company's workflows have data, distribution, and value attribution advantages, while more companies are pursuing internal builds through "spontaneous software" tooling or by leveraging commodity infrastructure (LLMs) to develop custom solutions – as Klarna recently did to great fanfare and apparent success. The company's OpenAI-powered chatbot is "doing the work of 700 people" as it handles ⅔ of the company's service interactions. 

  3. Infrastructure players are emerging to accelerate the adoption of outcome-based business models for AI services. We see opportunities for new solutions to handle attribution (measuring AI's impact across complex workflows), market-making (matching AI capabilities to business problems while optimizing for ROI), and financial infrastructure (enabling novel pricing structures). The parallel to mobile advertising is particularly instructive – companies like AppLovin didn't just facilitate transactions, they fundamentally transformed how value was created and measured in their market. These infrastructure players won't just serve existing markets – similar to Stripe in software, they'll expand the opportunity by making it possible for new types of AI services to emerge and scale.

  4. We also expect to see the emergence of teams that develop superior "process power" in AI implementation. Similar to how some organizations mastered lean manufacturing or agile development, these teams will systematically identify industries where AI can collapse cost structures (while maintaining value delivered), rapidly prototype and deploy AI solutions that replace expensive, manual workflows, and build durable institutional knowledge about which AI approaches work best for specific business problems.

    One way of thinking about this opportunity is as a modern version of Rocket Internet or Thrasio, but instead of geographic arbitrage or aggregation plays, they'd specialize in AI-driven transformation of stagnant sectors via an integrated product and go-to-market engine that allows them to capture a commensurate share of the value they create in an ecosystem. Perhaps a more ambitious framing is that a new class of private equity giants will emerge from this paradigm of buying and improving software and service businesses with AI (i.e. modern Constellation Software). 

Unsurprisingly, we believe the most attractive opportunity lies not in incrementally improving existing services with AI, but in fundamentally reimagining how industries operate. This leads us to two areas we are most intrigued by:

  1. Infrastructure providers enabling more precise outcome measurement, verification, optimization, and value capture across the AI services economy.

  2. Full-stack players who combine AI capabilities with deep domain expertise to fundamentally transform industry economics.

July

Thesis

Infrastructure + Industrial OT Security

Protecting critical digital and physical assets

November 2024

Infrastructure Tech

The industrial landscape is undergoing a substantial transformation towards network-driven operations defined by a massive number of new connected devices and software-driven coordination.

In energy, distributed production and smart grids are replacing centralized power plants. EV charging networks, real-time traffic coordination programs, advanced telematics and autonomy systems, and distributed warehousing models are changing the footprint of how we move goods and people. Factories are bringing production equipment online for the first time, enabling dynamic condition monitoring and remote intervention. Advanced robotics are beginning to arrive en masse, increasing the importance of connectivity and software (data) in nearly every corner of the industrial economy.

Across much of this evolving landscape, security remains an afterthought. Major initiatives, like the $7.5 billion allocated under the Bipartisan Infrastructure Law to build a national network of EV chargers, focus on how new solutions will be deployed, not how they will be defended.

With nation-state conflict increasingly shifting to target critical private sector assets and infrastructure, the expanded attack surface exposed by this transformation is being exploited to devastating effect. High-profile attacks like 2017's WannaCry (North Korea) and NotPetya (Russia), 2021's Darkside attack on the Colonial Pipeline, and Volt Typhoon's (China) recent attacks on US water, energy, and transportation systems highlight the breadth of the challenges facing industrial companies.

In 2023, OT attacks on industrial assets rose by 19%, affecting over 500 physical sites across manufacturing, energy, transportation, and building automation systems. Companies like Clorox and Johnson Controls lost tens of millions of dollars because of OT attacks, while Applied Materials lost $250m in sales stemming from an attack on a supplier, MKS Instruments. Enterprise spending on OT cybersecurity is projected to grow almost 70% from 2023 levels, reaching $21.6 billion globally by 2028.

The Industrial OT market is defined largely by heterogeneity and legacy complexity. Unlike pure-play IT projects where enterprise deployments share common requirements, industrial OT projects are fragmented and bespoke thanks to physical constraints and hard-to-replace legacy systems.

Historically, customers made OT security procurement decisions in parallel with capital investments in new factories or machinery. While incumbents like Palo Alto Networks and Cisco have long seen the need and have moved aggressively to fill the gap, few companies have successfully shifted from services to recurring product revenue business models.

This is starting to change. As more modern, open hardware systems and sensors are deployed in industrial settings, technology companies can address larger portions of each customer's fleet of assets on day one. The convergence of IT and OT has helped to simplify the sales process. We believe this will spark a resurgence of interest in emerging OT security solutions and create the foundation for companies capable of developing OEM-agnostic solutions – easily adopted and scaled across industries – to grow.


July

Thesis

European Public Safety Primes

With war on its doorstep and American attention shifting to the Pacific, Europe has been forced to prioritize resilience and strategic autonomy for the first time in a century.

November 2024

Industry Transformation

With war on its doorstep and American attention shifting to the Pacific, Europe has been forced to prioritize resilience and strategic autonomy for the first time in a century. But the threat to Europe’s way of life and future prosperity goes beyond front-line kinetic conflict. 

As the threat environment converges, the ‘why now’ case for companies building for public safety and security in Europe, for Europe, gets stronger by the day. Migration pressures, cyber threats, and cross-border crime require capabilities that existing systems simply cannot deliver. Europe must invest more proactively in innovation across other public safety and security pillars: law enforcement, fire and disaster response, and intelligence.

Across markets, AI is driving a step change in our ability to understand and act upon the physical world. The convergence of AI with real-world data – cameras, drones, satellite imagery, and other sensory inputs – makes it possible to build an intelligence layer that processes complexity at an unprecedented scale. This is opening new frontiers in public safety and security. Companies that can harness, integrate, and analyze this explosion of data to drive faster, safer, and more accurate decisions stand to become category champions and play a key role in forming the foundations for Europe’s long-term growth and strategic autonomy. 

Across the world, advanced policing systems are delivering for forward-thinking law enforcement and border control agencies. Solutions like Flock solve over 700,000 crimes annually, making departments more efficient, while drones drive faster and safer responses. As resistance to US technology persists, expanding EU internal security budgets and increasing evidence that these systems work will push Europe to seek out homegrown solutions.  

Fire and disaster response – helping mitigate the €77b in annual losses from natural disasters and protect human livelihoods – represents another critical opportunity area. New entrants combining predictive modeling of weather and the built environment with proactive intervention capabilities will capture value by closing Europe's critical response gaps.

Finally, intelligence agencies are approaching a breaking point: drowning in multi-modal data (from video to financial transactions) while inter-agency coordination fails. Companies that bridge European fragmentation while navigating privacy mandates will become essential infrastructure, enabling real-time response to physical and digital threats.

We see an opportunity for a new class of European "Public Safety Primes" to establish themselves in the market. The playbook of Axon – now a $45b+ company built through methodical expansion from tasers to body cameras to a comprehensive digital evidence platform – shows what's possible. The company has effectively zero penetration in Europe, and local players like Reveal Media and Zepcom remain subscale. Winners will start narrow with a must-have product, earn trust through measurable impact, and expand across the public safety landscape as system-wide advantages compound.

July

Thesis

Composable Companies

Composable companies fuse enduring core expertise with agile, mission-focused teams to rapidly capture diverse market opportunities and redefine traditional business models.

November 2024

Infrastructure Tech

A new organizational model is emerging: the composable company – organizations that blend permanent infrastructure with fluid product capabilities. At their core, they maintain:

  • Capital and resource allocation expertise

  • Shared technical infrastructure

  • World-class talent

  • Possibly, strategic customer and distribution relationships

By centralizing these unique capabilities, composable companies can swiftly identify, validate, and scale opportunities across their chosen markets. Around this foundation, teams can be rapidly assembled and reconfigured to pursue specific missions/product opportunities on various time scales.

This model excels in markets where opportunity spaces are in flux and an organization needs the flexibility and bandwidth to build out ideas that compound around a coherent view of the future but may manifest as distinct products for distinct customers.

Recent developments in AI further enhance this model's viability by enabling more cost-effective creation of software and supporting customization for specific use cases:

  • Reducing software development costs

  • Streamlining maintenance requirements

  • Improving customer support efficiency

  • Enabling more cost-effective creation of AI tools

The Resulting Structure

The end product could be a holding company-style enterprise that combines:

  • The above-described core infrastructure

  • Multiple AI products and tools with varying scale and durability

This structure enables the efficient pursuit of numerous opportunities while maintaining the potential for further asymmetric returns from breakthrough successes among them or in aggregate.


July

Thesis

Marketplaces for AI-Enabled Services

AI-powered, asset-light platforms now empower creators and knowledge workers to build profitable one-person companies that disrupt traditional firms and democratize professional services.

October 2024

Infrastructure Tech

The Rise of One-Person Companies

The unbundling of the firm has been in flight for decades. As the internet enabled increased access to global labor markets, outsourcing to lower-cost countries exploded. The proliferation of cloud computing and mobile took this a step further, making it possible to externalize an increasing number of key operational functions and allowing for more asset-light business models. This prompted a thesis several years ago that the rise of "One-Person Companies" remained an underrated opportunity. 

The next step in the evolution of the firm will build on this but will come at the problem from a different direction. It will be defined by the rise of One-Person Companies. Creators and knowledge workers will access external services that provide the capabilities to start and scale a firm and then re-bundle them in unique ways around their core skill set. They will monetize by selling products, services, and expertise to an owned audience that their core skill set has helped them build.

New platforms and infrastructure providers will emerge to support the tens of millions of individuals capable of building successful One-Person Companies along with the billions of consumers and businesses that will support them. More generally, the rise of One-Person Companies will inject dynamism into the broader economy and will play a role in driving more inclusive innovation.

AI – particularly agentic solutions capable of proactively understanding and executing end-to-end business workflows – represents the next leap in this evolution. As several investors and operators have observed, AI is empowering small groups more than ever before, and new businesses across the economy (i.e. not just tech startups) are building from inception with AI literacy as a core competency. According to Gusto, roughly a fifth of businesses created last year said they were using generative AI to more efficiently carry out things like market research, contract reviews, bookkeeping, and job postings.

Current and Future Market Structure

In complex, non-commodity service categories like strategy consulting, law, investment banking, and wealth management – where key individuals inside of large companies often already "run their own book" – we believe these forces create the opportunity for further fragmentation; i.e. the "creator economy-ization" of professional services.

A study cited in a 2015 Forbes article about million-dollar solo consulting businesses indicates this opportunity is not new. 

The U.S. Census Bureau found that of 3.2 million "nonemployer" professional services businesses in the U.S., there were 7,276 firms that brought in $1 million to $2.49 million in revenue in 2013, the most recent year for which statistics were available. And 321 superstars brought in $2.5 million to $4.99 million.

For the sake of simplicity throughout the document, we will refer to these companies as Service OPCs, though there is of course no reason why it must be a single person.

In practical terms, we believe we are entering a period where an even larger number of individuals or small teams with a differentiated skill set or value creation mechanism (product) will increasingly be able to leverage the marketplace (instead of “the firm”) for distribution and operational capacity to build profitable and durable OPCs.

This thesis rests largely on the idea that some elements of human judgment are inherently non-scalable and non-automatable (similar to our thesis around where value is captured in AI-driven content creation) and thus that the dynamics of the market will tend more towards specialization – thousands of small, profitable “winners” – rather than winner-take-all.

A services Mittelstand rather than Silicon Valley concentration.

We are interested in what the agentic technologies that drive end-to-end workflow execution will look like and what the coordination mechanism across those autonomous services will be for Service OPCs. Without both of these becoming a reality in parallel, the overhead of identifying and managing dozens of end-to-end AI agents (some of which will inherently be more end-to-end than others) – all while growing a client base and playing the most important role of delivering the differentiated service (even if some elements are made more efficient through automation) – is likely enough to push promising OPCs back into the arms of McKinsey or Kirkland & Ellis.

Effectively, we believe there is a Shopify-like opportunity to “arm the rebels” and build an ecosystem-level operating system for the AI-driven services transformation – combatting empire-building incumbents who view AI as a solidifier of their market positioning and what are sure to be dozens of overfunded venture-backed businesses promising to be “the next Goldman Sachs”.

Product and Technical Hypothesis

By engaging at the aggregation and coordination level, we are interested in answering the question of how a platform might “package” a business around an OPC’s core skill set to help it grow beyond its pre-AI agent potential. 

While we want to avoid being overly prescriptive in our analysis at such an early stage, we believe that for such a platform to represent a comprehensive – and attractive – alternative to the firm for Professional Service OPCs, it would possess some or all of the following characteristics (features), listed roughly in the order in which they might be sequenced from a product perspective:

1. Functional Automation (Operational Capacity) – This pillar would serve as an "Agent Store," featuring both proprietary workflows and third-party end-to-end AI agent solutions. It would offer OPCs end-to-end functional agents for various business operations, such as:

  • Contract management

  • Financial management and forecasting

  • Compliance and risk assessment

  • Resource allocation and project management

  • Continuous learning and skill development

  • Marketing and public relations

  • Legal execution

It is also interesting to consider how such a store could provide a distribution channel for third-party developers of specialized AI solutions like Devin (for software development) or Harvey (for legal services), or the dozens of AI agent companies seemingly launching each week (a quick scan of the most recent YC class highlights how prevalent this model has become for early-stage companies).

These developers would be incentivized to use the platform because it goes beyond simply offering access to agents to helping OPCs “package” a specific set of agents around the skills and ambitions of the company, which brings us to the next pillar of the platform.

2. Organizational Coordination (The AI COO) – The AI COO acts as the central nervous system of the OPC, ensuring all parts of the business work together seamlessly. Key functionalities include:

  • Automated integration between functional agents (the Bezos API Mandate on overdrive)

  • Workflow optimization across all business functions

  • Stakeholder communication management

  • Strategic decision support

  • Continuous improvement engine for business processes (i.e. vetting and implementing improved solutions or workflows autonomously). 

This pillar is critical in attracting and retaining both OPCs and third-party AI solution providers. For OPCs, it offers unprecedented operational efficiency and is the core enabler of leaving the firm behind for good. For AI solution developers, it ensures their tools are integrated effectively into the OPC's operations, maximizing utility and long-term revenue potential.
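To make the coordination idea concrete, below is a minimal sketch of how an AI COO might route tasks between functional agents and carry shared context between them. It is purely illustrative: the agent names, task shapes, and interfaces are our own placeholders, not a reference implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class FunctionalAgent:
    """Hypothetical wrapper around an end-to-end agent from the 'Agent Store'
    (contracts, finance, compliance, marketing, etc.)."""
    name: str
    handles: set                    # task types this agent can execute
    run: Callable[[dict], dict]     # task in, result out

@dataclass
class AICOO:
    """Toy coordination layer: routes each task to a capable agent and keeps
    shared context so one agent's output can feed the next (automated
    integration between functional agents)."""
    agents: List[FunctionalAgent]
    context: Dict[str, dict] = field(default_factory=dict)

    def dispatch(self, task: dict) -> dict:
        for agent in self.agents:
            if task["type"] in agent.handles:
                result = agent.run({**task, "context": self.context})
                self.context[task["id"]] = result   # visible to downstream agents
                return result
        raise ValueError(f"no agent registered for task type {task['type']!r}")

# Toy usage: a contract review flows into an invoicing task.
coo = AICOO(agents=[
    FunctionalAgent("contracts", {"review_contract"}, lambda t: {"status": "approved"}),
    FunctionalAgent("finance", {"send_invoice"}, lambda t: {"status": "invoiced"}),
])
coo.dispatch({"id": "c-1", "type": "review_contract", "client": "acme"})
coo.dispatch({"id": "i-1", "type": "send_invoice", "client": "acme"})
```

In practice, the hard problems are the ones this stub elides – schema mismatches between agents, error recovery, and deciding when a task needs human sign-off – which is exactly why the coordination layer, rather than any single agent, is where durable value may accrue.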

With these pillars working together, such a platform aims to create a robust ecosystem that not only supports individual OPCs but also fosters a network of AI solution providers. This symbiotic relationship between OPCs, the platform, and AI developers has the potential to drive rapid innovation cycles and expand the market in the same way Shopify has done in e-commerce for physical goods.

Antithesis

While we have a reasonable degree of confidence that the end state of the unbundling of the firm will look something like what we have laid out above (“Shopify for professional services” is likely a primitive analogy for what we will have in 2050), there are several reasons to be wary of the thesis. Much of this hinges on market timing as well as the common question of whether this will enable truly novel business models to emerge that incumbents are structurally unable to compete with.

  • We may be underestimating incumbent entrenchment, particularly around trust and social signaling, and their ability to adapt. “Nobody got fired for hiring McKinsey, Goldman, etc.” While not yet apparent at the operational level, incumbent consulting firms have been among the biggest beneficiaries of the generative AI explosion.

  • Regulatory, compliance, and legal structures may change more slowly than the technology. Sectors like law and finance are heavily regulated. OPCs might face significant hurdles in meeting compliance requirements without the resources and infrastructure of larger firms, potentially limiting their ability to operate in certain (high-value) areas.

  • Integration (i.e. the AI COO) may be substantially more complex than we have described. The reality of seamlessly integrating multiple complex AI systems could be far more challenging and error-prone than expected, leading to inefficiencies or significant mistakes.

July

Thesis

Benchmarking the Physical World

Standards are the hidden force behind market efficiency, capital formation, and global trade.

October 2024

Infrastructure Tech

Standards are the hidden force behind market efficiency, capital formation, and global trade. From the meter to the shipping container, standards create the shared layer of trust that helps markets function and capital flow.

In 1860, at the outset of America’s railroad frenzy, Henry Varnum Poor published “History of Railroads and Canals in the United States”. This work was the first attempt to arm investors with data on the burgeoning industry and laid the foundations for what is now Standard & Poor’s — a $100b+ company with $11b in annual revenue. Alongside its long-lived triopoly counterparts, Moody’s and Fitch, it has persisted thanks to powerful standards-based moats that make their frameworks critical infrastructure for global capital markets.

“We think of ourselves as a benchmark company. I mean, data is in vogue now, and people are really kind of a bit obsessed with data and data companies… I think data is nice, it’s interesting. But if you could turn something into a benchmark, it really transcends data.”
SVP at S&P Global, November 2020

As Marc Rubinstein wrote in “The Business of Benchmarking”, universal standards are usually unassailable. The risk for companies that manufacture them is less that their moat is crossed and more that their castle becomes irrelevant. We believe the current geopolitical, economic, and technological moment is creating a once-in-a-generation opportunity to successfully counterposition and eventually (finally!) displace the global ratings and benchmarking oligopoly.

Several forces are converging to create this opportunity. First, Great Power Competition is fundamentally reshaping global trade and industrial activity. The push for energy independence, secure supply chains, and strategic autonomy is driving massive investments in decarbonization and reindustrialization. Reconfigured trade flows and industrial priorities demand new frameworks for understanding risk and opportunity. Second, the growth of sensor networks, connected devices, and geospatial systems has created unprecedented visibility into physical-world operations and trade flows. This proliferation of data – from factory floors to shipping lanes – provides granular, real-time insights that were previously impossible to capture. Finally, advances in AI and machine learning allow us to process and derive meaning from complex, multi-modal data at the scale and speed demanded by modern trade.

We've seen the fundamental transformation of key commodity markets firsthand through our investment in KoBold Metals. Better collection and analysis of physical world data is revolutionizing resource discovery and development. Meanwhile, geopolitical machinations are accelerating the reconfiguration of global supply chains and trade routes, creating urgent demand for new frameworks to understand and price physical world assets. Traditional frameworks – built for a different era of global trade – are increasingly misaligned with markets that require real-time, granular insights to make decisions.

Success in this market isn't about attacking the incumbent oligopoly directly. Through counterpositioning, the opportunity lies in building for the new industrial economy with a model native to the speed and complexity of modern trade. Winners will start narrow, building density of data and trust in specific verticals, before sequencing alongside their customers' evolving needs to develop new pricing and risk infrastructure for the physical economy.

July

Thesis

Ports, Shipbuilding, and Marine Logistics Infrastructure

Vertically integrating our way to maritime parity

September 2024

Industry Transformation

Control over maritime infrastructure represents a significant geopolitical and economic advantage. China identified this importance early and is leveraging this reality, consolidating power and influence through strategic port investments. Today, China controls upwards of 90 major international ports, largely in the developing world. Meanwhile, the US doesn’t operate any major ports abroad. Europe also lacks a significant global presence and has ceded many critical assets to China.

Shipbuilding deficiency goes hand in hand with the shifting balance of naval power. China’s shipbuilding capacity is orders of magnitude greater than the rest of the world combined. 

There is an imperative and an opportunity for Western innovation to serve as a counterweight, thereby rebalancing the scales of influence. However, the barriers to achieving this are significant. Union influences and regulatory constraints in the US and Europe have impeded Western innovation in maritime infrastructure, logistics, and production. Even with the will to counter China’s Belt and Road maritime strategy, we currently lack the process knowledge and technological capacity to compete effectively.

The importance of building more effective maritime infrastructure goes beyond geopolitical influence. Increased global trade, driven by standardization like the container ship, has had a transformational positive impact on global poverty. Ports and maritime infrastructure also have a critical role to play as catalysts for a decarbonized shipping industry and as sustainable energy and industrial hubs.

A core assumption is that the pathway to meaningful transformation can't be solely based on developing technology for the existing market. That is to say that selling software and automation into the existing market – whether to ports or shipyards – won’t be enough. Because of the regulatory roadblocks mentioned above, emerging companies have limited distribution power. 

Instead, we believe in a more integrated approach, in line with the view that a degree of vertical integration is often a necessary component of new industrial innovation. Companies building and operating ports will likely integrate back to vessel operations themselves. Shipbuilders will need to be ship designers. And so on. 

For example, we should explore the possibility of acquiring, building, owning, and/or operating ports and physical marine infrastructure. This has the potential to create a symbiotic relationship where a company can be both the innovator and the “first and best customer”, allowing it to rapidly test, iterate, and improve its solutions in a real-world setting. Here, we can look to companies like Resilience and their end-to-end biomanufacturing model (acquiring/retrofitting manufacturing sites, developing automation technologies) for inspiration. KoBold Metals is another, of course.

The preliminary thesis, therefore, suggests that a strategic blend of technology development, port ownership/control, and operational excellence can provide a Western counterweight to China's marine infrastructure strategy and contribute to the development of a safer, more effective global trade system via increased automation, transparency, and standardization.


July

Thesis

Sustainable Construction

Construction is one of the world’s largest industries.

September 2024

Industry Transformation

Construction is one of the world’s largest industries. Global construction spending in 2023 amounted to some $13 trillion, 7% of global gross output. It is also one of the most unproductive sectors of the economy. Tight labor markets, regulatory complexity, and systemic fragmentation, along with cultural inertia, have contributed to stagnation and a lack of technological penetration.

This ineffectiveness does not discriminate by project size or scope. While nearly everything we touch and consume is produced in mass quantities, factory-produced homes still make up a small percentage of the overall new housing stock. Meanwhile, 98% of mega-projects experience cost overruns of 30% or more, and 77% face delays exceeding 40%. The impacts on broader economic growth are significant. Had construction productivity matched that in manufacturing over the past 20 years, the world would be $1.6 trillion – 2% of GDP – richer each year. Increasing pressure to decarbonize places additional stress on the low-margin, change-resistant industry. Through both operations (28%) and materials/inputs (11%), buildings account for nearly 40% of global emissions.

These supply-side deficiencies come against a backdrop of rapidly expanding demand – by 2040, the industry needs to expand production capacity by 70%+. This is creating a desperate, and long overdue, search for answers that we believe can only be met by a combination of technological innovation and novel production and business system design. 

While prior attempts to transform construction – most notably Katerra – have failed, several factors are converging to create a more compelling “why now” moment. Novel materials like green steel and low-carbon cement are approaching commercial viability, while mass timber innovations make building faster and less wasteful – all while delivering significant carbon sequestration. Construction robotics focused on autonomous assembly, logistics, and data capture can address the labor gap. Perhaps most importantly, advances in generative design and AI-powered collaboration tools can help target the small but critical coordination inefficiencies that have historically bottlenecked progress – precisely the type of system-wide improvements that Amdahl's Law suggests are essential for meaningful transformation.

We believe the companies that capitalize on this moment will do so through one of two models. The first is selective vertical integration – controlling critical capabilities in materials, design, and manufacturing, but executed with greater focus and patience than previous attempts. The second is a platform approach that centralizes key material and system design and standardizes interfaces between stakeholders while allowing specialized players to focus on their core competencies – similar to how semiconductor manufacturing evolved.

Both models recognize three essential elements that must work together: First, standardized approaches to next-generation materials that maximize both assembly efficiency and carbon benefits, from green steel to mass timber. Second, digital infrastructure that enables true system-wide optimization and seamless stakeholder coordination. Third, controlled manufacturing environments that bring automotive-style productivity to strategic components, whether owned directly or orchestrated through a network of partners.

July

Thesis

Contested Communication, Navigation, and Connectivity

Opportunities in the electromagnetic spectrum

May 2025

Infrastructure Tech

The electromagnetic spectrum has emerged as the primary battleground across defense, security, and commercial applications. As adversaries increasingly deploy sophisticated, yet inexpensive, jamming and spoofing capabilities, traditional communications and navigation infrastructure is struggling to keep pace. In Ukraine, Russian EW reduced GPS-guided artillery accuracy from ~70% to just 6%, while disrupting tactical communications across multiple frequency bands. In the Red Sea, Houthi interference has affected both ship navigation and satellite communications, showing that even non-state actors now possess substantial spectrum-denial capabilities. Electronic warfare now sits at the center of the conflict between India and Pakistan. 

As other technical components are commoditized via rapid hardware iteration cycles (i.e. “hardware at the speed of software”) for smaller systems and converging capabilities for larger platforms, the ability for militaries and infrastructure providers to defend against electromagnetic attacks is critical.

Over the last couple of years, several categories of capabilities have emerged around electronic warfare deterrence: (1) resilient communications leveraging frequency-hopping, mesh networking, and LPI/LPD waveforms; (2) alternative PNT solutions including distributed reference networks, vision-based positioning, and enhanced inertial systems; (3) spectrum situational awareness through distributed sensing and rapid electromagnetic analysis; and (4) operational resilience via multi-modal redundancy, AI-enabled adaptation, and distributed decision-making.

Current solutions are largely built by defense primes and legacy telecommunications companies. This market structure has resulted in fragmented, expensive solutions with limited civilian adoption and minimal cross-domain innovation. However, several catalysts are creating opportunities for new players, including the proliferation of software-defined hardware and sensors, autonomous systems requiring reliable communications and positioning, commercial technologies (like Starlink) proving resilient in conflict zones, and AI enabling new approaches to signal processing and spectrum management.

While the capabilities are clearly critical, we have questions about whether these standalone solutions will command premium pricing through sustainable differentiation or become capabilities integrated into larger platforms (with distribution power resting with the primes/hardware developers). The answer may vary by segment with specialized military applications supporting premium pricing due to certification requirements and mission-critical outcomes, while commercial applications may see contested operation capabilities become table stakes, with value accruing to system integrators and platforms.

We generally remain skeptical about the emergence of platform companies (spanning defense and civilian domains) in this space, given the challenges highlighted above and the overall fragmentation of the market in terms of requirements and use cases. However, we see a path to highly capital-efficient scale for companies that can radically collapse the cost structure, dramatically improve usability, and set themselves up for rapid adoption, thus avoiding some of the program-level integration that slows distribution and shifts power to larger players.

July

Thesis

Whole Genome Sequencing

Driving down the cost of precision healthcare

April 2025

Infrastructure Tech

The dramatic decline in whole-genome sequencing costs—falling exponentially faster than Moore's Law—has positioned genomics as a foundational technology set to transform healthcare, science, and biotechnology. Historically, sequencing a single human genome cost billions during the Human Genome Project; today, that cost has plummeted toward a hundred dollars per genome, driven primarily by breakthroughs in sequencing hardware (e.g., Illumina, Oxford Nanopore, PacBio, Ultima Genomics), sophisticated software infrastructure (e.g., alignment algorithms, AI-driven variant calling like Google's DeepVariant), and cloud-based bioinformatics pipelines.
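As a quick sanity check on the “faster than Moore's Law” claim, the back-of-the-envelope calculation below uses the document's own order-of-magnitude numbers; the exact figures are rough assumptions, not measured data.

```python
import math

# Order-of-magnitude inputs from the text: ~billions per genome in the
# Human Genome Project era, ~a hundred dollars per genome today.
cost_then, year_then = 1e9, 2001
cost_now, year_now = 1e2, 2023

halvings = math.log2(cost_then / cost_now)            # ~23 cost halvings
years_per_halving = (year_now - year_then) / halvings

print(f"{halvings:.1f} halvings -> one every {years_per_halving:.2f} years")
# Moore's Law implies a halving roughly every 2 years, so under these
# assumptions sequencing costs fell about twice as fast.
```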

This infrastructure can be conceptualized as a clear technology stack: at the base, specialized sequencing hardware generates raw genomic data; above this sits the sequencing layer, transforming raw signals into digital sequences; followed by a dedicated analysis layer, interpreting this sequence data into actionable insights; finally, the application layer utilizes these insights to deliver tangible value across clinical diagnostics, personalized medicine, synthetic biology, agriculture, and consumer wellness.
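One way to keep the stack straight is to read it as a simple pipeline, as in the toy sketch below; each function stands in for an entire layer of tooling, and the names and return values are ours, not any vendor's API.

```python
# Toy rendering of the genomics stack as a pipeline of layers.

def hardware_layer(sample: str) -> str:
    """Sequencing instrument: biological sample -> raw signal."""
    return f"raw-signal({sample})"

def sequencing_layer(signal: str) -> str:
    """Basecalling: raw signal -> digital sequence reads."""
    return f"reads[{signal}]"

def analysis_layer(reads: str) -> dict:
    """Alignment and variant calling: reads -> actionable variants."""
    return {"variants": ["chr1:12345A>G"], "source": reads}   # placeholder variant

def application_layer(variants: dict) -> str:
    """Clinical diagnostics, personalized medicine, agriculture, wellness."""
    return f"report covering {len(variants['variants'])} variant(s)"

print(application_layer(analysis_layer(sequencing_layer(hardware_layer("blood-draw")))))
```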

As sequencing costs continue to decline, the implications expand dramatically. Routine genomic sequencing will transition from a specialized procedure into everyday clinical diagnostics, akin to common blood tests. Individuals could receive continuous genomic monitoring—tracking genetic mutations, predispositions to disease, or response to treatments over time. This ongoing genomic "surveillance" would enable unprecedented early disease detection and proactive healthcare management at scale.

Beyond routine diagnostics, ultra-cheap sequencing unlocks massive scientific opportunities. With the ability to economically sequence millions of genomes, researchers can accumulate vast genomic datasets. These datasets, previously inconceivable due to cost constraints, will empower machine learning models capable of uncovering complex biological patterns and causal relationships that currently elude scientific understanding. Consequently, genomics-driven AI has the potential to illuminate fundamental biological mechanisms, significantly accelerate drug discovery, and foster innovation in synthetic biology by providing comprehensive, real-world training data.

In essence, the falling genomic sequencing cost curve is not merely an economic curiosity—it represents an inflection point in biology, medicine, and biotechnology. By transforming genomic data into an accessible, ubiquitous commodity, society stands at the brink of a profound shift toward precision health and accelerated scientific discovery, powered by genomics at unprecedented scale.

Key Questions:

  • On technology: How likely is it that sequencing will become cheap enough for widespread adoption? What defines "cheap enough," and what kind of new markets or applications could be unlocked by this?

  • On stack opportunities: Which part of the genomics technology stack is most compelling for investment or innovation? If genomics today parallels the semiconductor industry of the 1970s, is the infrastructure (sequencing and hardware) layer particularly interesting?

  • On investability and winners: How investable is this space, and what factors will determine the winners? Ultima Genomics appears promising, but what competing technologies exist? What are the regulatory hurdles, and how significant is the risk that established players might leapfrog emerging innovators?

  • On the analysis layer: Is the analysis layer less attractive due to the prominence of cloud solutions, open-source software, and powerful existing platforms? Or are there niche opportunities still worth pursuing here?

  • On the application layer: What consumer or end-user applications will emerge from cheaper genomics? Which areas (clinical diagnostics, consumer health, synthetic biology, agriculture) present the most exciting opportunities for transformative impact and investment?

July

Thesis

Job-Specific Report Writing

Removing unproductive "work around the work"

April 2025

Infrastructure Tech

Workers across industries face a massive documentation burden that diverts them from core functions, creating an enormous drag on productivity. Law enforcement officers spend up to 4 hours per shift on paperwork, physicians dedicate nearly half their day to EHR tasks, and social workers spend over 50% of their time on case notes versus only 20% with clients.

The convergence of multimodal AI, ubiquitous mobile devices, and improved computer vision creates an unprecedented opportunity to transform how operational knowledge is captured and acted upon. Advances in language understanding now enable systems to adapt to natural communication, while retrieval-augmented generation connects unstructured frontline data with existing knowledge bases.

We believe that in many cases audio and video will serve as the data wedge that gives AI reporting solutions their initial foothold; these solutions will then expand to become the system of record for entire operations, evolving from documentation tools into platform-level operating systems for their industries.

We are particularly interested in segments where reporting is not "the work" itself but represents a significant drag on core functions: Law Enforcement, Healthcare, Field Services, and Social Services.

This opportunity extends beyond frontline work to professional services (where reporting often is “the work”). In asset-based lending, for example, AI can streamline coordination between companies preparing reports for lenders and the specialists (compliance, ESG, risk) creating separate analyses, potentially providing a wedge into higher-value financial services.

Incumbents today offer legacy documentation systems not designed for multimodal, real-time work.

We think the ideal value chain integrates data capture (multimodal inputs), processing (AI models tuned to specific industries), automated report generation, system integration, and analytics. Early entrants typically focus on capture and generation, integrating with existing systems rather than replacing them. 
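The sketch below renders that value chain as stages; every stage name and signature is a hypothetical placeholder for what would, in practice, be speech-to-text, vision models, a domain-tuned LLM, and integrations with existing records systems.

```python
from dataclasses import dataclass

@dataclass
class Capture:
    """Multimodal inputs gathered in the field."""
    audio: bytes | None = None
    video: bytes | None = None
    notes: str = ""

def process(capture: Capture, domain: str) -> dict:
    """Industry-tuned processing: transcription, event extraction,
    normalization into structured fields (toy stand-in)."""
    return {"domain": domain, "events": [capture.notes or "..."]}

def generate_report(structured: dict, template: str) -> str:
    """Automated report generation against the industry's required template,
    where retrieval-augmented generation would pull from knowledge bases."""
    return f"[{structured['domain']}] {structured['events'][0]} ({template})"

def file_report(report: str, system_of_record: str) -> None:
    """System integration: push into the existing record system rather than
    replacing it - the typical early-entrant posture described above."""
    print(f"filed to {system_of_record}: {report}")

# Toy end-to-end run for a law-enforcement incident report.
file_report(
    generate_report(process(Capture(notes="traffic stop"), "law_enforcement"),
                    template="incident-v1"),
    system_of_record="RMS",
)
```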

However, full-stack players will emerge over time who aim to become the primary system of record by starting with documentation and expanding into adjacent workflows.

Several forces are accelerating adoption: acute labor shortages make documentation inefficiency untenable; multimodal AI that can process audio, video, and text with domain-specific understanding (and do it in real time) is coming rapidly down the cost curve and up the utility curve; and the proliferation of smartphones, body cameras, and sensors provides data at minimal cost. Frontline workers show increasing receptiveness, with 96% indicating willingness to accept increased data capture in exchange for reduced paperwork, creating strong user adoption dynamics alongside clear ROI.

We see the most compelling opportunities in categories where:

  • Documentation consumes 40%+ of professional time

  • Current processes remain paper-based or use outdated systems

  • Workflows involve multimodal data (voice, video, text) AI can now process

  • Regulatory requirements add complexity that AI can navigate

  • Solutions can evolve from point applications to systems of record and define novel workflows

The most promising approaches will be vertical-specific with deep domain expertise rather than horizontal platforms. Successful companies will build from multimodal data capture, prioritize trust and usability, and pursue a "land and expand" strategy.

There's also significant potential for more aligned pricing models. Rather than traditional SaaS subscriptions, outcome-based pricing (tied to time saved, compliance improvements, or capacity created) may better reflect the transformative impact and capture more value, particularly in budget-constrained industries.

In financial and professional services, report coordination itself can serve as a strategic wedge. By streamlining complex workflows around documentation, AI systems can position themselves to capture higher-value activities, moving from optimizing document flow to facilitating transactions or informing capital allocation decisions. While potentially less immediately scalable than frontline reporting, these opportunities could unlock substantial value where information coordination directly impacts financial outcomes.

July

Thesis

Construction Robotics

Creating value on the robotics smiling curve

April 2025

Industry Transformation

Construction represents one of the strongest and largest pools of opportunity on the "vertical application" side of the Robotics Smiling Curve.

With construction sector productivity growth under 1% annually for decades and 98% of megaprojects experiencing 30%+ cost overruns, the industry has massively underperformed. However, we are witnessing a critical inflection point where technological advances, economic pressures, and industry shifts are converging to drive robotics adoption at unprecedented rates.

While we have often been drawn to full-stack models in various sectors – e.g. KoBold Metals – we are seeing adoption take off faster in construction with more specialized plays. Even though the IRR on full-stack projects seems very strong (we’ve heard 50%+ in some cases), there is substantial friction in the system pushing back on an overhaul of the entire construction process.

Specialized solutions create immediate, measurable ROI that doesn't require waiting for an entire project lifecycle to validate viability. Developers, contractors, and financiers can see results within days or weeks rather than months or years and are able to compartmentalize risk, dramatically lowering adoption barriers and accelerating market penetration.

Construction robotics sits firmly on the high-value end of our curve for several reasons:

  • Deep domain expertise is essential. Construction sites represent "high situational diversity" environments with unstructured, dynamic conditions that resist generic solutions.

  • Workflow integration creates defensibility. Success requires seamless integration into complex construction workflows across multiple trades and stakeholders.

  • Specialized data builds moats. Data from construction sites creates proprietary insights / faster feedback loops that generalist platforms cannot easily replicate.

  • The cost of failure drives specialist adoption. Financial consequences of mistakes make construction firms more likely to trust specialists with demonstrated domain expertise who can help avoid cascading delays/overruns.

The economics now pencil out thanks to a continued rise in labor costs and rapid decreases in robotics hardware costs (along with increased modularity). Meanwhile, technical barriers around computer vision and edge computing are being solved and the growing adoption of digital design tooling provides the foundation robots need for reliable operation.

As a result, we believe the overall mindset of the industry has shifted. While still resistant to the full-stack / modular builder approaches that attempt to revolutionize the entire building process, construction robotics companies are finding the greatest success by addressing specific high-value tasks. We see promising applications across several categories → layout and site surveying (an extension of our thesis on data-driven infrastructure management), earthmoving and excavation (off-road autonomy), structural assembly, interior finishing, and demolition and hazardous work.

This task-specific approach, paired with business models that reduce risk for contractors/partners, aligns perfectly with our vertical robotics thesis—solving discrete, high-ROI problems through deep domain expertise and workflow integration rather than attempting full automation at once.

Our initial view here is that 2025 marks a critical inflection point where construction robotics solutions are beginning to show that substantial productivity improvements in the sector are, in fact, possible even in challenging labor and supply chain environments.


July

Thesis

Scaling the Skilled Physical Workforce

Breaking a critical reindustrialization bottleneck

March 2025

The skilled industrial workforce represents one of the most significant bottlenecks to economic growth and energy transition across Western economies. As reindustrialization accelerates, the gap between labor supply and demand is widening dramatically, threatening sovereign objectives and creating a risk of even greater economic instability.

Recent data highlights the severity of this challenge. According to a 2025 Capgemini report, 87% of organizations anticipate significant labor shortages as the workforce ages, potentially impeding reindustrialization efforts. The US Chamber of Commerce reports that 45% of manufacturing job openings went unfilled in 2023. Germany alone faces a projected shortage of 350,000+ skilled energy transition workers – a gap likely to expand as a result of the country’s massive fiscal stimulus package targeting infrastructure development.

Our conversations with industrial leaders consistently reveal that talent, not technology, is their primary constraint. While full automation promises to someday eliminate this bottleneck, the immediate path forward lies in solutions that transform how industrial skills are acquired, deployed, and scaled. 

We see this massive reshuffling of the global industrial economy as a rare wedge for companies to solve acute and immediate labor market pain points that will position them to expand in various directions to become fundamental infrastructure for critical industries. We see three distinct approaches to capturing value in this emerging category:

  1. Technology-Enabled Labor Pool Expansion – Companies in this category use automation, prefabrication, and digital workflows to reduce skill requirements, effectively expanding the addressable labor pool. These businesses typically operate as end-to-end service providers, controlling both the customer experience and the production process. Companies like Reframe Systems in construction exemplify this approach, with prefabrication and factory-based assembly methods allowing less-skilled workers to build high-quality structures under controlled conditions.

  2. Operational Standardization – This approach focuses less on technological transformation and more on establishing systematic operational playbooks for traditionally fragmented services. Companies standardize workflows, quality control, and customer interactions to create trusted brands in high-variation service businesses like machining (Isembard), renovation (Block), and specialty trades (Craftwork). Their competitive advantage stems from superior training systems, project management, and quality assurance rather than fundamental technology innovation.

  3. System Orchestration Platforms – This model involves building the connective tissue between labor supply and market demand. These platforms start by solving acute pain points in workforce access, management, or training for a specific sector, then expand to adjacent market categories. The growth of Workrise, which evolved from its start as an oil and gas labor marketplace (RigUp) to a comprehensive vendor and workforce management system for the energy industry, exemplifies this growth pathway.

Across all three models, the acquisition and development of talent remains the critical success factor. Companies are innovating in both initial training—creating accelerated pathways for career-changers to enter skilled trades—and continuous upskilling. The approaches above, and the best practices we expect industrial and field service customers to adopt, are consistent with our research on AI-enabled constellations of experts and multimodal tools for frontline workers: lower the barriers to effectiveness by removing friction around data capture, accelerating feedback loops, and making master-level insight more easily accessible. 

Companies that successfully scale the industrial workforce will become the operational backbone of the energy and manufacturing renaissance while positioning themselves to drive, rather than be disrupted by, the inevitable wave of industrial automation. By building the talent, data, and process knowledge layers of industrial operations today, these companies establish themselves as the essential connective tissue that future automation systems will augment (and rely on) rather than replace.


July

Thesis

Monte Carlos for Real Life

Thoughts on "Moneyball for the Mundane"

February 2025

Fundamental Consumer

Alternative titles for this might be “Moneyball for the Mundane” or “Tim Urban’s Big Green Tree as an (AI-enabled) service” – and we have touched on similar concepts in our constellations of experts memo.

With better data about an individual – choices, preferences, biases – leading up to today, shouldn’t systems be able to guide us toward a set of actions that will unlock what we perceive to be the ideal green path going forward? 
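As a toy illustration of the title concept, the sketch below runs a Monte Carlo over candidate life decisions: simulate many plausible years under each choice, then compare the outcome distributions. Everything in it – the decisions, the personal model, the scores – is invented for illustration; a real system would be fit on the kind of captured personal data discussed below.

```python
import random

DECISIONS = ["take_new_job", "stay_and_upskill", "start_company"]

def simulate_year(decision: str, rng: random.Random) -> float:
    """Hypothetical personal model: a 'flourishing' score for one simulated
    year, with different expected values and volatilities per decision."""
    base = {"take_new_job": 0.6, "stay_and_upskill": 0.5, "start_company": 0.4}[decision]
    spread = {"take_new_job": 0.2, "stay_and_upskill": 0.1, "start_company": 0.5}[decision]
    return base + rng.gauss(0, spread)

def monte_carlo(decision: str, n: int = 10_000, seed: int = 7) -> tuple[float, float]:
    rng = random.Random(seed)
    samples = sorted(simulate_year(decision, rng) for _ in range(n))
    mean = sum(samples) / n
    downside = samples[int(0.05 * n)]   # 5th-percentile outcome
    return mean, downside

for d in DECISIONS:
    mean, downside = monte_carlo(d)
    print(f"{d:17s} expected={mean:+.2f}  5th-pct={downside:+.2f}")
```

The point is not the numbers but the shape of the output: a distribution per path rather than a single recommendation, which matters for the questions of agency raised below.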

Daniel Ek has said in the past that one of the formative decisions Spotify made was capturing every possible data point about every user from day zero. What seemed costly and wasteful to many at the outset became the foundation for Discover, one of the most successful behavioral engines ever built. 

It is common for people to lament the scale of data exhaust created in our day-to-day activities – every financial transaction, location we visit, and conversation.

But what if we don’t have enough data about ourselves?

What if we should be capturing everything we possibly can – everything we see through Meta Ray-Ban lenses, every call and conversation, and every biometric indicator imaginable in the hope that someone builds Spotify Discover for the real world?

Now, this is dangerous territory, and the Black Mirror episodes write themselves – from humans losing agency and blindly following LLM guidance, to a further flattening of culture as models “Moneyball” our lives.

However, there's an important distinction here. Sports are finite games with clear rules and boundaries – they can be "solved" through optimization. This inevitably leads to convergence on ideal strategies. Life’s most important activities look a lot more like infinite games. Career paths, relationships, creative pursuits – these domains have no clear endpoint and near-limitless possible states. In these infinite spaces, better pattern recognition and decision support could actually unlock more unique paths, not fewer. 

As with GPS, we see a future where people, supported by AI, may rely less on their own sense of direction while also becoming far more willing to be spontaneous, take risks, and drive to new places because they know the metaphorical GPS will always help them find their way. Moreover, these systems – millions of local optimization engines assisting individuals to find their most valuable paths and optimize their own unique game – might enhance rather than diminish human agency by illuminating options we'd never see on our own, creating more diversity of outcomes rather than conformity.

Many great businesses have already been built around the notion of data-driven, adaptive personal improvement – Oura is a $5b+ business, Noom is estimated to have done $1b in ARR in 2023, and Duolingo recently touched a $20b+ market cap.

We believe there are more levels to be unlocked. 

Subscription models have thus far proven to be a great business model for finding the balance between data capture/use and user agency. With better upstream and downstream data, is an even better alignment of interests possible, e.g. via outcome-based models? While large platforms are the natural aggregators of the data that will power such solutions, is an ecosystem of hyper-personal services on top of this data possible thanks to advances in agent infrastructure and the rise of spontaneous software development? And what technical advances are required to make this a reality (e.g. advances in model memory)?

While we don't claim to know exactly what form these companies will take, we're convinced that better data capture and analysis will unlock new paths to human flourishing that are currently invisible to us. Just as Spotify's prescient bet on comprehensive data collection revealed music we'd love but never discover on our own, better systems for understanding ourselves might illuminate versions of our lives we'd never imagined possible. Perhaps the future isn't about optimizing toward a single ideal, but about expanding the landscape of possibilities for each individual. 

Our "Constellation of Experts" thesis points to one potential direction, but we suspect there are many more waiting to be discovered.


July

Thesis

Protein / Bio Dev Tools

Accelerating drug development

February 2025

Industry Transformation

Advancements in AI-driven and computational tools are unlocking protein development, making it more predictable and efficient. Much like software developers rely on specialized dev tools, modern protein engineering now benefits from:

  • Machine Learning (ML) (e.g., transformer models such as ESM and ProGen)

  • Generative AI (e.g., diffusion models and protein language models similar to GPT)

  • Structure Prediction Tools (e.g., DeepMind’s AlphaFold)

  • Automation (e.g., lab systems integrated into closed-loop workflows)

These innovations are shifting protein engineering from an experimental-first approach to a design-first discipline—accelerating the process, reducing costs, and increasing reliability. In effect, biology is gradually becoming a programmable engineering field.
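The design-first loop can be sketched as generate → score → select → validate. The stand-ins below are random toys in place of a generative protein model and an in-silico fitness predictor; no specific library's API is implied.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def generate_candidates(n: int, length: int, rng: random.Random) -> list[str]:
    """Stand-in for sampling sequences from a generative protein model
    (an ESM/ProGen-style language model or a diffusion model)."""
    return ["".join(rng.choice(AMINO_ACIDS) for _ in range(length)) for _ in range(n)]

def score(sequence: str, rng: random.Random) -> float:
    """Stand-in for in-silico prediction of stability, binding, or activity."""
    return rng.random()

def design_round(n_candidates: int = 1_000, top_k: int = 10, seed: int = 0) -> list[str]:
    rng = random.Random(seed)
    candidates = generate_candidates(n_candidates, length=120, rng=rng)
    ranked = sorted(candidates, key=lambda s: score(s, rng), reverse=True)
    return ranked[:top_k]   # only these reach (expensive) wet-lab validation

shortlist = design_round()
print(f"{len(shortlist)} sequences forwarded to experimental validation")
```

The economics of the shift live in that last comment: each design round replaces a large amount of wet-lab screening with cheap computation, which is what makes the process faster, cheaper, and more reliable.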

Two Strategic Approaches for Companies

Companies developing these models can generally choose between two business strategies:

  1. Developing IP In-House or via Partnerships
    In this model, companies integrate advanced models into a full-stack pipeline—from sequence design and structure prediction to experimental validation. Working independently or closely with biotech or pharmaceutical partners generates proprietary intellectual property (IP) and novel, high-value protein candidates (e.g., unique therapies or industrial enzymes). This approach emphasizes full control over the design-build-test cycle, directly driving competitive advantages through exclusive IP.

  2. Providing Accessible Tools (including OS models) as a Service
    Alternatively, companies can offer these advanced models as stand-alone tools. By developing user-friendly interfaces, APIs, and analytics, they allow customers to access cutting-edge protein development technology without building or managing a full pipeline and the associated risk. This model reduces friction and cost, enabling a broader customer base to benefit from AI-driven insights. Moreover, serving a diverse array of clients can generate a continuous stream of training data, which in turn improves the model’s overall performance over time.

While we like to back full-stack companies—most notably our investment in KoBold Metals—and we’re confident that substantial full-stack winners will emerge in the AI space, we recognize that AI in Bio differs structurally from AI in mining. Factors such as the larger scale of funding, a rapidly shifting technology landscape, and the necessity for deep domain expertise create unique challenges. These differences make it harder for us to confidently pick a single full-stack winner in the AI in Bio space. As technology rather than bioscience investors, our investment style, expertise, and risk profile are better suited to a tools-as-a-service approach. This model leverages modular, accessible AI solutions that can serve a wide range of customers without requiring the heavy capital and integration commitments typical of full-stack pipelines.

  • It should be relatively easier to win a customer with a protein development tool than by forming a full pipeline partnership. Full pipeline partnerships require significant commitment and capacity from both parties, and they tend to involve higher friction and cost. In contrast, a protein tool can be offered as a service, which is less resource-intensive for the customer. It is crucial that the tool/model be designed to be as low-friction as possible and fit into existing workflows and tech stacks.

  • Additionally, by providing a tool rather than a full pipeline, a company can serve a broad range of customers with varied protein needs. This diversity allows the company to continuously collect and integrate training data from multiple sources, which can significantly enhance the overall model's performance over time. Serving many different customers not only spreads the risk but also offers early-mover advantages, leading to greater long-term success compared to focusing on individual, high-commitment pipeline partnerships.

  • However, it's critical that the data from these interactions is retained and used to continually improve the model. Without this feedback loop, the tool's advantage is lost. In summary, a tool-based approach reduces risk, costs, and friction while offering scalability and long-term benefits through improved model capabilities.

  • It is important to note that even if a company is pursuing a tool model initially, as long as it builds out wet-lab capacity (which we deem crucial for training data generation, even for tools-as-a-service businesses), there is optionality to later develop its own IP.

  • The ability to do value/outcome-based pricing.

A critical challenge with this model is how to capture value effectively. Outcome-based pricing—where fees align with measurable benefits—has proven successful in industries with quantifiable spending and benefits (which can be in the USD 100Ms for a single protein). Protein development, with its clear performance metrics, is well suited to this model. By tying pricing to outcomes, companies can align incentives and reduce risk for customers.
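As a stylized illustration of how such pricing might compare to a flat subscription (all numbers invented):

```python
# Stylized outcome-based pricing: the provider takes a small share of
# measurable customer value, with a floor to cover serving costs.

def outcome_fee(value_created_usd: float, provider_share: float = 0.05,
                floor_usd: float = 50_000) -> float:
    """Fee = max(floor, share of measured value)."""
    return max(floor_usd, provider_share * value_created_usd)

# A successful protein candidate can be worth USD 100Ms; a 5% share of a
# USD 200M outcome dwarfs a typical SaaS subscription, while the floor
# applies when a project creates little measurable value.
print(outcome_fee(200_000_000))   # 10000000.0
print(outcome_fee(300_000))       # 50000.0 (floor applies)
```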

July

Thesis

Crypto Agent Payment Infrastructure

Enabling agents to transact

January 2025

Industry Transformation

As AI agents advance in autonomy and complexity, their ability to perform financial transactions becomes increasingly vital. Equipping these agents with digital wallets allows them to engage in economic activity: buying services, paying for APIs, managing cloud compute budgets, or even shopping online for real-world goods (e.g., purchasing data from other agents, restocking supplies for a small business, triggering payments for subscription-based services, or managing micro-payments for IoT tasks). Integrating decentralized payment systems offers a promising pathway to give AI agents the financial autonomy necessary to operate fluidly within these diverse contexts, rather than relying on legacy payment infrastructure, which is often gatekept, human-centric, and geographically fragmented. 

Decentralized payment systems provide several benefits for AI agents. They enable rapid transactions, often processed in real-time, which is crucial for agents requiring prompt payment capabilities. The reduction of intermediaries in these systems lowers transaction costs, making microtransactions economically viable. Furthermore, cryptocurrencies facilitate borderless transactions, allowing AI agents to operate internationally without the complications of currency exchange or cross-border fees. The use of blockchain-based smart contracts also allows for programmable, self-executing agreements, enabling AI agents to autonomously execute payments when predefined conditions are met, thereby enhancing automation and reducing the need for human intervention.
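From the agent's side, the “programmable, self-executing agreement” pattern can be sketched in a few lines. The Wallet below is a hypothetical stand-in for a wallet SDK or on-chain contract; in a real smart contract, the condition would be enforced on chain rather than in the agent's own code.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Wallet:
    """Hypothetical agent wallet (stand-in for a wallet SDK / smart contract)."""
    balance_usdc: float

    def transfer(self, to: str, amount: float) -> None:
        assert amount <= self.balance_usdc, "insufficient funds"
        self.balance_usdc -= amount
        print(f"sent {amount} USDC to {to}")

def pay_when(condition: Callable[[], bool], wallet: Wallet,
             payee: str, amount: float) -> bool:
    """Self-executing agreement: pay only once a predefined condition
    (e.g. 'dataset delivered and verified') is met."""
    if condition():
        wallet.transfer(payee, amount)
        return True
    return False

# Toy usage: an agent pays a data provider once delivery is verified.
delivery_verified = {"ok": True}          # would be an oracle or receipt check
agent_wallet = Wallet(balance_usdc=25.0)
pay_when(lambda: delivery_verified["ok"], agent_wallet,
         payee="0xDATA_PROVIDER", amount=3.5)
```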

At the same time, we appreciate that implementing decentralized payment capabilities for AI agents introduces several challenges that necessitate thoughtful product design and strategic solutions. Regulatory compliance is paramount, as decentralized systems must navigate complex legal landscapes, including AML and KYC, which can vary significantly across jurisdictions. To address this, products should incorporate robust compliance frameworks that adapt to regional legal requirements. Security is another critical concern. Ensuring data privacy and implementing strong encryption and authentication protocols can protect against unauthorized access and data breaches. Finally, a user-friendly interface is vital for human operators to monitor and manage AI agents' financial activities effectively. 

By addressing these challenges through thoughtful design and strategic implementation, we believe that there is an opportunity for products to effectively harness the benefits of decentralized AI payments, paving the way for more autonomous and efficient AI-driven economic interactions.


July

Thesis

LLM Application Deployment: Resilience and Optionality

Today, the deployment of generative AI solutions into the enterprise, particularly large companies, has started and often exceeds expectations.

January 2025

Infrastructure Tech

The generative AI era presents an interesting paradox – strong confidence in the directional arrow of technological progress (the ever-expanding and evolving LLM blast radius) coupled with significant uncertainty around the economic implications, both macro and micro. Today, the deployment of generative AI solutions into the enterprise, particularly large companies, has started and often exceeds expectations.

At the same time, there is wide agreement that while early applications are driving positive ROI, most organizations face a significant change management problem in fully incorporating them into existing operational frameworks – “there are no AI-shaped holes lying around.” For many enterprises and their executives, this has led to a “flight to trust,” with large consulting firms benefitting from enterprise appetite to utilize generative AI. This uncertainty around future enterprise workflows is further reflected in the observation that most AI startups that have found traction have done so with an anthropomorphized approach, “selling the work” in areas like legal support and accounting – essentially building an end-to-end AI replica of what customers have come to expect from human + software.

While we think great businesses can be built there, this can’t be all. We believe that as organizations and society develop a better understanding of AI, we will build core workflows around this new paradigm, constantly adapting existing organizational structures and coming up with entirely new ones. We broadly differentiate between resilience and optionality and believe that both areas provide opportunities for interesting models and platforms to emerge.

Resilience focuses on enabling existing companies – forced to adopt AI to stay competitive – to do so in secure and effective ways. As described above, these companies already have processes and employees; both might have a hard time adapting.

As with any complex system, we believe there is a unique opportunity in looking at the smallest unit in organizations – employees. While executives and consultants try to implement AI policies top-down, high-agency individuals (armed with ChatGPT, Claude, Cursor, and the latest tools) are constantly discovering productivity enhancements built around their idiosyncratic workflows, often utilizing these tools without explicit permission. We see an opportunity to push much of the work of identifying enterprise-specific best practices to these forward-thinking individuals, and for a novel platform focused on this bottom-up approach to AI resilience to emerge.

In the process, such a platform could kill two birds with one stone. It provides a starting point for better data controls and security processes to manage risk while helping companies understand the financial implications (productivity improvements, cost structure changes, etc.) of continued AI deployment. 

Furthermore, monitoring and visibility into AI use by employees helps enterprises gain insight into best practices (making AI fit into existing holes) that can be rolled out across the organization. The big opportunity that emerges from this wedge and trust-building model is that such a platform positions itself for a world of “spontaneous software” and, possibly, “AI as the last employee” – similar to how Workday came to define ERP for the “digital transformation” era.

Optionality focuses on building companies around novel organizational structures with a view to the upside, native to AI workflows and not possible before. 

This is an extension of what we previously wrote on “spontaneous software and SaaS on-demand”. In line with a recent post from Nustom that draws parallels from the autonomous vehicle market to propose the idea of a self-driving startup, we believe there is a massive opportunity here for companies that operate like mobile game studios, making use of the reality that software is increasingly cheaper to write and startups cheaper to run as AI gets better and more capable at both. We expect these companies will excel at rapid experimentation and iteration, consistently positioning themselves ahead of the capability curve to try to catch lightning in a bottle (hits-driven), or – in combination – being long-tail driven with a large number of small cashflow-generating businesses under one roof.

July

Thesis

Digital Olfaction

Some of AI's greatest opportunities lie in its application to understanding and transforming the physical world.

January 2025

Infrastructure Tech

Some of AI's greatest opportunities lie in its application to understanding and transforming the physical world. We believe in the potential of convolutional neural networks, GNNs, and transformers to help us deal with this complexity and make sense of the world in ways that we have not been able to (we internally call these "expansion" use cases). This theme runs through several of our investments, most notably KoBold Metals. 

We believe that digital olfaction and a better understanding of how molecules make us smell are among those areas. Scent, due to its complexity, is our least understood sense. Novel approaches to machine learning, such as GNNs, have proven able to cut through this complexity and beat the human nose at scent profile classification based on molecular structures. Osmo, the company at the forefront of this research, has proven that it can utilize this understanding to develop novel scents. It is reasonable to assume that this technology will enable the faster development of novel molecules at lower cost and at scale.

In 2023, the global aroma chemicals market was valued at approximately USD 5.53 billion (unclear if this also includes produced chemicals vs. only IP). This market is essentially dominated by a few large players: Givaudan, Firmenich, IFF, and Symrise. All these players are fully integrated, meaning they both develop scent molecules (IP) and manufacture them. It is unclear how much value is in the pure IP, but some tailwinds could favor the emergence of a novel AI-enabled player focused on novel IP. In 2023, a class action lawsuit was filed against major fragrance companies Givaudan, Firmenich, IFF, and Symrise. This followed multinational (Switzerland, Europe) antitrust investigations into suspected price-fixing by these companies initiated earlier the same year. Moreover, there is a marked shift in the industry toward sustainable molecules that don’t require scarce resources and have no negative health effects.

Generating molecules that either have novel scent profiles or mimic existing ones without negative externalities (e.g., health effects associated with production) is likely a unique fit for these AI models. To create maximum value, such a model (or suite of models) would likely need to 1) model molecule interactions to compose a whole scent, 2) understand constraints (e.g., toxicity, costs), and 3) assess whether these molecule sets can be produced at scale.
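A toy sketch of how those three capabilities could compose into a single screening loop – every function, field, and threshold below is a hypothetical placeholder rather than a description of any real system:

  import random

  def propose_molecule(seed_profile):
      """Stand-in for capability 1: a generative scent/interaction model."""
      return {"id": f"{seed_profile}-variant-{random.randint(0, 9999)}",
              "toxicity": random.random(),           # stand-in constraint scores
              "unit_cost": random.uniform(1, 200),
              "producibility": random.random()}

  def screen_candidates(seed_profile, n=1000):
      """Generate candidates, then filter on constraints (capability 2)
      and producibility at scale (capability 3)."""
      candidates = [propose_molecule(seed_profile) for _ in range(n)]
      return [m for m in candidates
              if m["toxicity"] < 0.1          # safety constraint
              and m["unit_cost"] < 100.0      # cost constraint
              and m["producibility"] > 0.5]   # plausible at scale

  print(len(screen_candidates("woody-amber")))  # surviving candidates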

Moreover, we see substantial potential for market expansion. If these AI systems can identify, map, and predict the behavior of scent molecules, and if certain hardware advances are made (essentially a chip capable of detecting, analyzing, and recreating scent), several new application areas emerge: from environmental monitoring, to medical diagnostics where AI can detect disease biomarkers through molecular analysis, to consumer applications such as capturing, reproducing, and sharing scent online. While it is hard to quantify, it is reasonable to assume that there is substantial option value.

July

Thesis

Space Debris

The number of objects we are sending to space is growing exponentially.

January 2025

Infrastructure Tech

The number of objects we are sending to space is growing exponentially. Thanks to SpaceX, launch costs have fallen 80-90%. While it took nearly 60 years to put 2,000 satellites in orbit, we launched 3,000 in 2023 alone. 100k satellites are expected to launch by 2030, marking a further increase in the complexity of space operations. 

As old space assets deteriorate and more are launched, collisions are inevitable, particularly as a result of space debris. Satellites already make 100,000+ collision-avoidance maneuvers per year. Losses due to collisions in low Earth orbit were estimated at ~$100m in 2020. Since then, the satellite population has at least tripled.
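For intuition on why precise monitoring matters: conjunction screening ultimately comes down to integrating the relative position uncertainty over the combined hard-body area in the encounter plane, so collision probability is exquisitely sensitive to tracking accuracy. A back-of-envelope version of the textbook short-encounter approximation, with entirely hypothetical numbers:

  import numpy as np

  def collision_probability(miss_xy, sigma_xy, hard_body_radius, n=400):
      """Integrate an uncorrelated 2D Gaussian position error over the
      combined hard-body disc in the encounter plane (all units meters)."""
      mx, my = miss_xy
      sx, sy = sigma_xy
      r = hard_body_radius
      xs = np.linspace(-r, r, n)
      ys = np.linspace(-r, r, n)
      X, Y = np.meshgrid(xs, ys)
      inside = X**2 + Y**2 <= r**2
      pdf = np.exp(-0.5 * (((X - mx) / sx) ** 2 + ((Y - my) / sy) ** 2))
      pdf /= 2 * np.pi * sx * sy
      cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
      return float((pdf * inside).sum() * cell)

  # 200 m predicted miss distance, 100 m tracking uncertainty per axis,
  # 5 m combined hard-body radius:
  print(collision_probability((200.0, 0.0), (100.0, 100.0), 5.0))

Halving the tracking uncertainty in this toy setup changes the answer by orders of magnitude – the monitoring problem is, at its core, a data-quality problem.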

While responsiveness is improving (e.g. edge GPUs to enable on-board autonomy), hundreds of billions of dollars in assets are and will be exposed without i) precise monitoring, ii) proactive defense systems (beyond trying to move out of the way), and iii) adequate financial risk management (i.e. insurance models). 

While it is easy to forget amid the current space market momentum, the industry is still walking a fine line – something that seems to have motivated Elon Musk’s all-in support of Donald Trump’s election. As the nuclear industry has demonstrated, the public perception pendulum in highly sensitive industries can swing toward the negative (for decades) after only a small number of high-profile failures. Space debris is the type of problem that, left unchecked, poses a Three Mile Island-style risk for the industry. And like most “waste”-related problems, there is often not a strong enough incentive for any single actor to solve it until it is too late.

The fragmented incentives and control mechanisms in addressing space debris are evident in the current regulatory frameworks. 

The United States is a patchwork of policies, with agencies like the FAA, FCC, and NASA each taking different approaches to limiting or removing space waste. Europe’s approach has been more comprehensive with the European Space Agency (ESA) developing the "Zero Debris Charter," aiming to prevent the generation of new orbital debris by 2030. As of October 2024, 110 countries or entities have joined the charter, and discussions are ongoing with major satellite operators for participation. 

Despite these initiatives, the absence of a binding international legal framework leads to a "tragedy of the commons" scenario, where individual actors may lack sufficient incentives to invest in debris mitigation (opting instead to accelerate commercial advances amid increasing competition), resulting in increased collective risk.

International cooperation around debris is also threatened by geopolitical posturing. Without better visibility and defense mechanisms, nation-states will always have plausible deniability around the destruction of important satellites and space infrastructure (“it wasn’t us, it was debris”). Since even 1mm fragments of debris can disable a satellite entirely, this is not too much of a logical leap.

We believe that solving the problem of space debris creates an interesting wedge for companies to eventually become critical infrastructure for space security and risk management.

July

Thesis

AI-enabled Business Planning

Giving organizations what Cisco called its ‘virtual close’ advantage more than 20 years ago – on steroids.

January 2025

Industry Transformation

The generative AI era presents an interesting paradox: strong confidence in the directional arrow of technological progress (the ever-expanding and evolving LLM blast radius) coupled with significant uncertainty around the macro- and microeconomic implications. Acknowledging this uncertainty, we expect three things to happen as we move toward a world of increased AI and agent usage by organizations and, possibly, a trend towards “AI as the last employee.”

  1. Data and information will be processed much faster, leading to real-time insights and decision support. 

  2. The metabolic rate of organizations is set to go up as feedback loops from planning to action become faster.

  3. Organizations will face substantially different resource and capital allocation decisions.

All of the above requires an orchestration, planning, and decision layer purpose-built for and enabling these changing dynamics. As a result, we see an opportunity to build an AI-enabled business planning platform with substantial optionality to become an integral part of the roll-out and management of increasingly powerful AI systems – giving organizations what Cisco called its ‘virtual close’ advantage more than 20 years ago, on steroids.


July

Thesis

Data-Driven Infrastructure Management

There is an opportunity for new players to emerge at the intersection of two historically distinct types of businesses: infrastructure inspection and architecture, engineering, and construction (AEC).

January 2025

Industry Transformation

One of our core theses around AI in the physical world is that novel data generation can drive substantial value creation. Robotics, drones, and sensors used for inspection fit right in: providing customers with high-value (and revenue-generating) inspection services enables unique data collection at scale. As a result, we believe there is an opportunity for new players to emerge at the intersection of two historically distinct types of businesses: infrastructure inspection and architecture, engineering, and construction (AEC). The inspection business generates the data that enables high-value AI-enabled services in the design, construction, and maintenance phases of a project.

We are interested in investing in companies that have found a unique wedge into the market to build large sets of novel and proprietary data that enable a flywheel of higher-quality services. We believe that the category leader in this space can create an agnostic platform compatible with different robot types from various manufacturers to deliver an increasing range of such services without needing hardware development. 

More effectively managing critical infrastructure assets through technology-enabled inspection, dynamic monitoring, and proactive intervention represents a crucial lever in stabilizing risks presented by emerging security, climate, and energy challenges, promoting public health and safety, and driving more effective capital allocation across the public and private sectors. 

Every four years, the American Society of Civil Engineers (ASCE) releases the Report Card for America’s Infrastructure. It details the condition and performance of the nation’s infrastructure. Its most recent report, released in 2021, gave the United States a C- grade, highlighting a widening investment gap that the ASCE estimates will cost each American $3,300 per year by 2039 (USD 90B+ total). In the years since the report, pressure has increased thanks to challenges imposed by extreme weather events, substantial changes in the global energy mix, and an increasingly tenuous national security situation.

Private infrastructure, from energy plants to commercial shipping, is fighting against the challenges and economic losses associated with system outages. For example, a study by the Lawrence Berkeley National Laboratory and the U.S. Department of Energy estimated that power interruptions cost the U.S. economy about $44 billion annually.

Solving these problems at scale requires moving away from manual inspection and towards more scalable technology-enabled approaches. These are substantially safer and generate dramatically more data that can serve as the foundation for appreciably higher-quality decisions.

At the same time, public and private asset owners are starting to realize that inspection and data collection ideally begin at the outset of large projects and continue during construction. That way, decisions can be optimized, mistakes can be identified early, and a digital foundation exists for future inspections.


July

Thesis

Cross-Cloud Cost Optimization

Moving beyond cost tracking

December 2024

Infrastructure Tech

The FinOps and cloud cost optimization space is rapidly evolving beyond simple cost tracking. We observe a few trends: 

  • FinOps now encompasses a broader scope, including SaaS management, AI workload optimization, hybrid cloud governance, and even on-prem IT cost control. Meanwhile, AI workloads have emerged as a significant cost driver, pushing FinOps to expand its focus. As enterprises invest in machine learning and generative AI, managing the cost of GPU-heavy training and inference processes is becoming a priority.

  • As cloud spending continues to rise, organizations are shifting from merely monitoring expenses to implementing proactive cost governance and automation strategies. A key driver of this transformation is AI-powered automation. Companies are no longer satisfied with dashboards that only report costs – they need real-time anomaly detection and hands-free cost optimization (a toy sketch of the detection side follows this list).

  • Multi-cloud and hybrid cloud optimization are also gaining traction, as companies increasingly operate across AWS, GCP, Azure, and on-prem environments. Traditional cost tools built for a single cloud provider are no longer sufficient. This can go as far as intelligently placing workloads across clouds for optimal performance and cost efficiency. This shift highlights the growing demand for solutions that seamlessly manage costs in complex, multi-cloud environments.
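As referenced above, a toy version of the anomaly-detection piece – a robust z-score over a trailing window of daily spend. Real products would work per service and per account, but the shape is similar:

  import numpy as np

  def flag_cost_anomalies(daily_spend, window=14, threshold=3.5):
      """Flag days whose spend deviates from the trailing median by more
      than `threshold` robust z-scores (median absolute deviation)."""
      spend = np.asarray(daily_spend, dtype=float)
      anomalies = []
      for i in range(window, len(spend)):
          hist = spend[i - window:i]
          med = np.median(hist)
          mad = np.median(np.abs(hist - med)) or 1e-9  # avoid divide-by-zero
          z = 0.6745 * (spend[i] - med) / mad
          if abs(z) > threshold:
              anomalies.append((i, float(spend[i]), round(float(z), 1)))
      return anomalies

  # A flat ~$1,000/day baseline with one runaway GPU day:
  spend = [1000 + np.random.randn() * 25 for _ in range(30)]
  spend[22] = 4200
  print(flag_cost_anomalies(spend))

The “hands-free” step – automatically rightsizing, rescheduling, or killing the offending workload – is where the execution layer in scenario 2 below comes in.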

In our view, all of these trends make it necessary for independent third-party tools to emerge, rather than simply relying on the tools provided by cloud providers (although the providers have been pushing hard to develop these). As of now, we are uncertain what the FinOps stack might look like, but essentially, we see two scenarios:

  1. Full-stack providers that can cover all the bases outlined above. This could either be one of the emergent players or one of the existing legacy players.

  2. A split stack: primarily, one layer for observation and aggregation with a lot of breadth, and another for execution and automation at the technical level.

July

Thesis

Precision Wellness

Better health outcomes—delivered at lower costs and with greater accessibility—are fundamental to economic growth and human flourishing.

December 2024

Fundamental Consumer

Better health outcomes—delivered at lower costs and with greater accessibility—are fundamental to economic growth and human flourishing. Preventative healthcare represents our largest lever to unlock better outcomes at scale. However, the centralized control, opaque incentives, and high friction that characterize today’s healthcare system hold back progress. It is not built with technological advancement in mind and fails to meet the standard of experiences consumers have elsewhere. 

As the prevailing model fails to evolve, a new paradigm – precision wellness – is emerging. This transformation mirrors the forces that transformed media, finance, and commerce by redistributing power over the experience to individuals. From top-down institutional mandate to bottom-up iteration, from one-size-fits-all solutions to hyper-personalization, from controlled to in control.

The wellness-driven consumer is at the center of this shift. Motivated by the same “divine discontent” that has continuously sparked consumer innovation across the economy, their demands for scientific rigor and an elevated user experience are accelerating the growth of the precision wellness opportunity. 

  • The next phase of GLP-1 adoption, perhaps the most important catalyst of this overall opportunity, appears increasingly driven by consumer-centric companies; 

  • The vast array of cheap, passive sensors integrated into phones, watches, and headphones creates longitudinal data that was previously unavailable, while clinical-grade modalities on consumer devices build trust in health-focused technology and reorient expectations toward continuous, rather than episodic, monitoring and intervention; 

  • The "mainstreaming of biohacking" is evident in the adoption of CGM among non-diabetics, the growth in advanced biomarker testing, whole genome testing, full-body MRIs, and the increasing demand for personalized, science-driven health optimization protocols.

As more people experience the feedback loops of better health and deeper health understanding – for themselves and those around them – their engagement compounds. This flywheel effect, combined with traditional healthcare's eroding monopoly on trust and access, creates a strong why now for emerging companies capable of integrating science, technology, and brand building. 

We also recognize that precision wellness has a significant blast radius effect, with aggregators, especially Apple, at the center. Data gravity, vast resources, and an incentive to commoditize complementary solutions make it unwise to compete directly. Thus, we are most interested in companies building non-device-centric models for distributed discovery, diagnostics, and delivery. This includes:

  • Next-gen healthcare providers integrating novel diagnostics and data collection into full-service care delivery (going beyond simply digitizing traditional models).

  • Knowledge networks (content + community + coaching) that use personalized insights to help guide users through specific niches of their precision wellness journey, creating a layer of trust in a consumer area that can be overwhelming due to a low signal-to-noise ratio.  

  • Companies using biological insights, often via at-home testing modalities, as a wedge to build up proprietary data sources, trusted brands, and communities.

July

Thesis

“Scale as a Service” for the Bio-Industrial Economy

Over the past 25 years, the emergence of "scale-as-a-service" has powered multiple "invisible innovation" waves in the software world.

November 2024

Infrastructure Tech

Over the past 25 years, the emergence of "scale-as-a-service" has powered multiple "invisible innovation" waves in the software world. Infrastructure begets applications begets the need for more infrastructure, and so on. Platforms like AWS, Shopify, Stripe, and Twilio build scale on behalf of their customers in important but non-core functions and enable access via API. Over time, emerging companies bundle infrastructure from various scale-as-a-service providers, making it possible to go bigger faster or target previously unaddressable niches. Thanks to the programmatic nature of interactions, scale-as-a-service solutions minimize coordination costs and maximize control, enabling precision execution that aligns with a company’s desired speed, scale, and budget.

As scientific breakthroughs make biology more programmable, the why now for Scale-as-a-Service models is approaching a tipping point – but with important nuance. While AI represents a powerful enabler of new product and process design, the reality of biological complexity means we first need better tools and data to model and manipulate processes. As Niko McCarty notes, even the most significant AI breakthrough, AlphaFold, reveals the gap between static prediction and biological reality. Scale-as-a-Service providers can help bridge this gap by industrializing high-quality, standardized data collection across the design and production process. A 2023 study of biomanufacturing bottlenecks found that companies consistently struggle with purification, continuous processing, analytics, and expertise gaps – all areas where specialized infrastructure providers can play a Shopify-like role.

Meanwhile, dominant macro trends like the energy transition and US-China competition are pushing companies and countries towards more sustainable and domestic production models. Half of the world’s largest companies are committed to net zero, “reshoring” continues to grow in popularity on earnings calls, and the Biden Administration has set targets like producing 30%+ of the US chemical demand via biomanufacturing pathways within 20 years.

While first-generation companies like Ginkgo and Zymergen have struggled massively, select second-generation companies like Solugen show signs of staying power. If (still a big if) these next-gen companies prove the economic viability of bioproduction, we expect to see several successful scale-as-a-service providers emerge. These companies will become foundational platforms for trillion-dollar industries like chemicals, materials, energy, agriculture, CPG, and food, where bioproduction remains pre-commercial scale. Like the internet, the invisible innovation waves created by this infrastructure application cycle may show that the market for bio-enabled solutions is larger and more diverse than we could have imagined a priori.

We expect most successful scale-as-a-service providers to start with asset-lite approaches. Expanding upon Chris Dixon's "come for the tool, stay for the network" insight, these companies will initially aggregate supply, demand, and attention through useful data and coordination tools. From there, they will evolve into market orchestrators, connecting buyers with sellers and unlocking new capital flows. Eventually, many will build out physical infrastructure at scale, becoming the operating systems of the bio-industrial economy.

July

Thesis

Prediction Markets

Prediction markets represent a fundamental transformation in how society aggregates and values information.

November 2024

Infrastructure Tech

Prediction markets represent a fundamental transformation in how society aggregates and values information. As trust in traditional institutions continues to erode, prediction markets will emerge as an efficient mechanism for pricing risk, surfacing truth, and reshaping decision-making across the economy.

Throughout the history of technology – particularly the internet – important platforms often begin in legal grey areas, where user demand outpaces regulatory frameworks. From eBay to Uber, Airbnb, and Spotify, the most impactful companies solve problems that existing systems cannot address – or, more precisely, where prevailing incentive structures baked into law by incumbents actively resist progress. 

While incumbent resistance will be significant, we believe there is an opening for new mechanisms of collective intelligence that align incentives toward accuracy and accountability.

This transformation aligns with our broader theses around the role of better data (often physical data) driving a shift toward more dynamic and precise information-centric business models. In the same way that pricing for digital tools is evolving from static per-seat licenses to value-based outcomes, prediction markets represent a step-function improvement in how we price and distribute information. Once people experience the power of real-time, market-driven signals – whether in election forecasting or project management – we see no going back to traditional polling or planning systems. Thus, we believe the long-term opportunity extends far beyond gambling or speculation – it's about fundamentally improving how societies and organizations make decisions and allocate resources. 

Amidst the “moment” prediction markets are having in the wake of the US presidential election, critics rightly point to fundamental challenges: the subsidies required to bootstrap liquidity in niche markets may prove prohibitively expensive, and many use cases beyond elections and sports could struggle to attract meaningful participation. While these are serious concerns, we believe they echo historical skepticism of other transformative markets. For example, at the outset of equity markets, stock trading was seen as gambling and was dominated by "bucket shops" where people placed bets on price movements without owning shares. Such activity was seen as zero-sum, manipulated, and socially destructive. Yet over time, infrastructure emerged to make securities trading safer and more accessible: mutual funds, for example, transformed speculation into investment, regulations built trust, and exchanges standardized trading.
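The subsidy question, at least, is mechanically well understood. Hanson’s logarithmic market scoring rule (LMSR), a standard automated market maker for prediction markets, makes the cost of bootstrapping liquidity explicit: the sponsor’s worst-case loss is capped at b·ln(n) for n outcomes, where the parameter b sets market depth. A minimal sketch:

  import math

  def lmsr_cost(q, b):
      """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
      return b * math.log(sum(math.exp(qi / b) for qi in q))

  def lmsr_price(q, b, i):
      """Instantaneous price of outcome i (reads as a probability)."""
      denom = sum(math.exp(qi / b) for qi in q)
      return math.exp(q[i] / b) / denom

  def buy_cost(q, b, i, shares):
      """What a trader pays the market maker for `shares` of outcome i."""
      after = list(q)
      after[i] += shares
      return lmsr_cost(after, b) - lmsr_cost(q, b)

  q = [0.0, 0.0]                 # two-outcome market seeded at 50/50
  b = 100.0                      # liquidity parameter
  print(lmsr_price(q, b, 0))     # 0.5
  print(buy_cost(q, b, 0, 10))   # ~5.1 to buy 10 "yes" shares
  print(b * math.log(2))         # ~69.3: sponsor's maximum subsidy

The open question critics raise is not whether such subsidies can be bounded, but whether anyone will pay b·ln(n) across thousands of niche markets – which is precisely where AI-driven market making could change the economics.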

A similar story played out in e-commerce. In the mid-1990s, conventional wisdom held that consumers would never trust online platforms with their credit card information. Amazon launched in 1995 to widespread skepticism, yet by creating infrastructure that built trust and reduced friction, e-commerce became not just accepted but essential. 

Our hypothesis is that we are in the 1995–2000 period for prediction markets – mass-market cultural awareness is growing and momentum is clear – but market penetration is little more than a blip in the overall picture. In the same way that mobile devices and social networks (among other things) provided the technological catalyst for deeper e-commerce penetration, we see AI (and AI agents) as a critical enabler of the next wave of prediction market growth. For example, by creating liquidity in thousands of micro-markets, AI has the potential to help users take more sophisticated portfolio approaches and contribute to a “utilization cascade” that shifts prediction markets from perceived gambling into new “standard” tooling for information discovery.

Success in this category will likely follow e-commerce's growth trajectory. While early adopters drove initial growth, widespread adoption required infrastructure that created trust and reduced friction. Today's prediction market leaders will similarly need to build both consumer-facing brands and backend capabilities. We expect to see an "Amazon of prediction markets" emerge – potentially Kalshi – that combines direct consumer reach with infrastructure that powers other platforms. This will enable an ecosystem of niche players targeting specific verticals or user segments.

A key question remains around where value ultimately gets captured. Just as e-commerce value accrued to new platforms, internet-native brands, and incumbents who successfully adapted (e.g. Walmart), prediction market infrastructure will create several winning archetypes. Beyond pure-play platforms like Polymarket, existing media and financial platforms that already own distribution – from ESPN to Bloomberg – could emerge as powerful players.

The opportunity extends far beyond any single vertical. By expanding the surface area of possible transactions, prediction markets could enable new types of information exchange that are hard to imagine a priori. Winners will start by dominating specific verticals where information asymmetries create clear value propositions (Books:Amazon::Elections:Kalshi), then expand into adjacent use cases as users become comfortable with the model. Those who can navigate the regulatory environment while building trusted brands will become essential infrastructure for the information economy.

July

Thesis

AI-driven SaaS Replacement

LLMs have started and will continue to bring down the costs of writing software.

November 2024

Infrastructure Tech

As we discussed in many of our other category theses, we believe that in the AI era many of the laws of physics around technology and business models are changing, and much has been written about the proclaimed ‘End of Software.’ The argument goes something like this.

LLMs have started and will continue to bring down the costs of writing software. This leads to a world where software is increasingly created for N-of-1 customers and is easily modifiable over time. Ideating and building (or prompting) this software will increasingly shift from developers to non-technical users.

As software creation becomes cheap, traditional software companies face a challenge. Their core value proposition (development, maintenance, and hosting of software, customization, and customer support), business model, and moats are rooted in leveraging initial investments into brands, standards, and free cash flow – and those, in turn, into features and app ecosystems that cater to a heterogeneous customer base with little incentive to go elsewhere due to switching costs. Switching becomes substantially more attractive if the ‘perfect,’ highly personalized software is the alternative. This fundamentally challenges the business model of these companies.

With that established, the key question is what the new paradigm might look like. 

There is a vision that if LLMs and agents have access to all our data, software and interfaces will be generated in real time, on demand, and only emerge when they are needed. Fully in the control of users (or their LLMs/agents), this software costs only as much as the compute required to build it. While this vision is undoubtedly appealing, there are a few missing pieces:

For one, we assume that it will take some time for models to generate end-to-end software applications. Until this is possible, someone needs to be responsible for ensuring the software works – not only from a technical perspective but also from a usability perspective. Just because a feature can be easily built doesn’t mean it should be built. Until models can fully understand context (at which point it is questionable why there would even be a need for human-readable software), domain-specific product expertise will be required to build useful products for specific use cases. Moreover, the need for customer support – enterprise customers want somebody to lean on when things go wrong – will likely remain.

As a result, we believe there is an opportunity to build a company here. This company will have certain features: 

  • Provide a platform that guides non-technical users in creating applications for their specific needs

  • Have an in-house team of developers who guarantee that software is functional when LLMs fail

  • Create a platform / App Store that enables developers to publish their applications and others to use them

  • Offer a data platform and SDKs that match needs to existing or newly developed features and make integrating those features easy

  • Business Model: 

    • Initial Development - one-off 

    • (Data) Platform - ongoing 

    • Developer Platform / App Store - marketplace take rate

July

Thesis

AI-Driven CAE

Modern CAE's transformation combines AI and deep learning, drastically improving physical design efficiency, creating opportunities for new hardware-makers, and challenging established OEMs.

October 2024

Industry Transformation

Computer-aided engineering (CAE) tools are the backbone of modern product development. As a result, they underpin much of our modern physical environment. Today, several key catalysts are shifting the playing field and creating long-awaited opportunities for new physical design companies to emerge and scale. 

  1. There is immense potential to commercialize the growing body of relevant research.  Computer-aided engineering (CAE) traditionally utilizes numerical solvers to simulate and analyze complex engineering problems (e.g., Finite Element Analysis (FEA), Computational Fluid Dynamics (CFD), and Multibody Dynamics (MBD)). Depending on the complexity of the problem and the computational resources available, CAE simulations can take anywhere from a few minutes to several days or weeks.

    In recent years, there has been growing research on training deep-learning models on simulation data to create so-called deep-learning surrogates. These are computational models that leverage deep neural networks to approximate complex physical systems or simulations, providing fast and efficient predictions while maintaining reasonable accuracy (i.e., running complex simulations in seconds). Methods include data-driven (e.g., GNNs, NOs, RNNs) and physics-driven (e.g., PINNs) deep learning and generative models; a minimal illustration of the surrogate pattern appears after this list. Technology maturation makes the opportunity ripe for the right team – one with access to data and the ability to learn from a constant feedback loop of testing these models against a verifiable physical reality – to push the boundaries of these methods. Combining these into easy-to-use workflows can fundamentally change how engineering (simulation, in particular) is done at scale.

    An example from research by McKinsey on this: "One company in the power-generation sector, for example, used the approach to optimize the design of large turbines for hydroelectric plants [...] the company reduced the engineering hours required to create a new turbine design by 50 percent and cut the end-to-end design process by a quarter. Better still, the approach generated turbines that were up to 0.4 percentage points more efficient than conventional designs". This is the type of upside that is necessary to get the attention of potential customers in the space. 

  2. We are in the early days of a hardware supercycle. The top-down push by Western economies to reindustrialize and redevelop production capacity in the name of national security, climate change mitigation, and economic growth has driven capital and talent toward physical world innovation. With role models like Tesla, SpaceX, and Anduril leading the charge, hundreds of well-funded aerospace, energy, manufacturing, robotics, and transportation companies are emerging with demands for a modern physical design and engineering stack. This increased competition is pushing incumbent OEMs to experiment with new solutions for the first time.  

  3. AI-native business models create a wedge for new players. A shift in business models has thus far marked the deployment of advanced AI into foundational industries – KoBold owns its own exploration assets while Isomorphic Labs shares in the economic upside of its IP. Similar value-based, consultative, and/or “forward deployed” approaches – a partner rather than software vendor relationship – could help new players gain footing with large customers and expand over time, avoiding the all-or-nothing sales cycles that have long protected existing leaders.

The combination of evolving customer demands, novel research, and new business models has formed the basis for an entirely new paradigm of computer-aided, AI-driven design and engineering tools. These tools unlock faster, cheaper feedback loops, shift workflows from linear to parallel, and open up emergent use cases. This increases both speed and quality in a way incumbents struggle to defend against.
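As referenced above, a minimal illustration of the surrogate pattern: fit a small network to (design parameters → solver output) pairs so that design queries answer near-instantly instead of at solver runtimes. The data here is synthetic and the architecture deliberately trivial; production surrogates use the GNN/NO/PINN families named above:

  import torch
  from torch import nn

  # Synthetic stand-in for solver data; in practice x/y come from FEA or CFD runs.
  torch.manual_seed(0)
  x = torch.rand(2048, 8)                                  # 8 design parameters
  y = (x[:, :1] * x[:, 1:2]).sqrt() + 0.1 * x.sum(dim=1, keepdim=True)

  surrogate = nn.Sequential(
      nn.Linear(8, 64), nn.ReLU(),
      nn.Linear(64, 64), nn.ReLU(),
      nn.Linear(64, 1),
  )
  opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

  for step in range(2000):                                 # supervised regression on solver outputs
      loss = nn.functional.mse_loss(surrogate(x), y)
      opt.zero_grad()
      loss.backward()
      opt.step()

  # A trained surrogate evaluates new designs in a fraction of a millisecond,
  # enabling the parallel, optimization-in-the-loop workflows described above.
  print(surrogate(torch.rand(1, 8)))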

July

Thesis

The Manufacturing Action Layer

As the cost of adding sensors to anything and everything in the manufacturing process has decreased significantly, the amount of data produced in the factory environment has exploded.

October 2024

Industry Transformation

The anticipated (and much-needed) manufacturing renaissance in the US and Europe – sparked by rising competition with China and a movement to invest in expanding domestic production capacity in the wake of pandemic-induced supply chain breakdowns – is hampered by several deficiencies. Among these limiting factors is the challenge of turning vast amounts of disparate industrial data into actionable insights and execution that drive true productivity gains.

As the cost of adding sensors to anything and everything in the manufacturing process has decreased significantly, the amount of data produced in the factory environment has exploded. However, conversations with people across the manufacturing landscape make it clear that the impact of production digitization continues to underperform expectations. 

More than a decade into the Industrial IoT wave, most data from the manufacturing process ends up – at best – brought together into static Excel files. And while platforms like Palantir’s AIP promise rapid transformation, the ground reality is that data from different systems continues to live only in the heads of individual operators – a critical risk in an industry with massive turnover and an aging workforce. The VP of Industry 4.0 at a ~$5b market cap automotive supplier recently remarked that they still lack the visibility to know whether a machine is even running in a factory without calling someone on the ground.

Incumbent software offerings in manufacturing are often stitched together over years (even decades) of acquisitions and integrations, resulting in a mess of fragmentation, technical debt, information silos, and process bottlenecks.

Given this backdrop – and the macro tailwinds that will continue to drive demand for next-gen manufacturing solutions – our initial hypothesis is that there are two interesting angles of attack for new companies: 

  1. Modern industrial control and execution systems capable of aggregating data across modalities and business systems, automating mission-critical operation and production activities, and assuming responsibility (via outcome-based pricing models) for driving efficiencies.

  2. Software-defined manufacturers aiming to displace incumbent manufacturers entirely through more efficient end-to-end approaches in specific verticals/use cases. 

Both models face challenges. The “base rates” for selling impactful digital solutions to manufacturers are mediocre at best, and the companies that have reached scale – platforms like Cognite, Palantir, and Samsara – have significant distribution advantages that narrower emerging entrants must overcome. For the “full stack” players, the scale potential is clear, but it remains to be seen whether venture capital is the right financing tool (“CNC machines with startup branding” is how one person described one of the companies to us).

July

Thesis

Nuclear Supply Chain

Technology deployment cycles – from the railroads to the internet – have long been characterized by dramatic boom and bust cycles. The nuclear cycle is picking up pace.

September 2024

Industry Transformation

Working Thesis

Technology deployment cycles – from the railroads to the internet – have long been characterized by dramatic boom and bust cycles. And while overreliance on analogies is dangerous, these historical precedents provide one lens through which we can contextualize the current moment. 

Today, the AI boom is drawing parallels to the 1990s dot-com and telecom bubbles, with massive capital expenditures, promises of exponential growth, and early productivity gains. On the horizon, the potential of general-purpose robotics even resembles the iPhone-driven mobile revolution that followed almost a decade after the dot-com bust. 

But the differences between the two eras are equally striking. Today, incumbent technology companies possess more structural power over the ecosystem than 30 years ago, suggesting perhaps less overall volatility at the expense of dynamism – i.e., the Magnificent Seven serve as “hysteria dampeners” thanks to massive balance sheets and distribution networks. And while opportunism in AI is certainly present, the technical barriers to entry for building a competitive foundation model (and the pace of customer feedback) are substantially higher than for building an ISP during the dot-com frenzy.

However, the most consequential difference between the two eras may be the central role of energy – namely the re-emergence of nuclear power – in today's AI boom, particularly with the backdrop of rising great power competition and the ever-present specter of climate change.

Unlike the telecom infrastructure of the dot-com period (and data centers in today's cycle), which serve singular purposes, the expansion of nuclear infrastructure addresses multiple critical challenges. First, it promises to play a significant role in solving for the energy intensity and reliability demands of AI data centers. This is a problem we are looking at from several angles – nuclear and other forms of production, efficiency (via our industrial action layer research), and finally via an exploration of better distribution and grid resilience technologies.  

Beyond serving AI data centers, nuclear power (along with the other categories highlighted), meets the vast need for clean baseload power to accelerate decarbonization and the push for increased energy security amidst heightened geopolitical risk. 

As a result, nuclear’s positioning as a one-size-fits-all solution to many of our most pressing concerns – and thus its (theoretical) resilience to the fortunes of any single macro factor – makes it an attractive “picks and shovels” opportunity perfectly in line with the three major economic supercycles of the next several decades (AI, climate, and national security) – provided the current momentum can overcome generations of cultural and political baggage.

This baggage is complex, equal parts social, economic, and regulatory, with each reinforcing the others in a vicious cycle. High-profile accidents and proliferation risk have dominated the social consciousness for 40+ years. This, in turn, influences regulation, which increases the red tape related to the development and deployment of safer, more effective modern solutions. As process knowledge is lost and talent leaves the industry, costs spiral even higher, and we are left with the current state of affairs.

Despite nuclear’s history as a safe, clean, and cost-effective technology, its costs have jumped 33% while the cost of new solar generation continues to plummet (down over 90%). The latter is, to be clear, a positive – we are pro-energy abundance more than we are wedded to a single approach – and showcases the power of technological learning rates when unleashed.

Current and Future Market Structure 

Today, the narrative – and the fundamentals – around advanced nuclear are finally shifting towards the positive across several dimensions.

  • Russia’s full-scale invasion of Ukraine provided a clear focal point for the importance of energy security and the role nuclear energy can play in decoupling from Russia. Between 2021 and 2022, the percentage of Europeans at least somewhat in favor of nuclear energy jumped substantially – in Germany from 29% to 44%, in France from 45% to 55%, and in the UK from 39% to 53%.

  • Energy has become the core bottleneck to scaling AI. While GPU scarcity dominated the preceding couple of years, everyone from Mark Zuckerberg to Sam Altman believes the next substantial step forward requires energy-related breakthroughs. As a result, big tech companies have become the “urgent buyers” necessary to drive market transformation. Microsoft’s actions signal a clear belief in the long-term need to expand nuclear capacity. Its recent 20-year power purchase agreement with Constellation, which will revive the Three Mile Island facility, is as symbolically important as it is economically.   

  • The capital markets are responding swiftly to this step change in demand, with financial institutions including BofA, Morgan Stanley, and Goldman backing a COP28 goal of tripling nuclear capacity by 2050. The commitment to triple capacity also came with support from the US, UK, Japan, Sweden, and the United Arab Emirates.

  • Regulatory support has not been limited to COP commitments. In the United States, for example, President Biden recently signed into law the ADVANCE Act, which aims to streamline licensing, promote R&D investment, and contribute to workforce development.

  • This follows on the heels of (tepid) progress on the deployment front. In the United States, Vogtle 3 and 4 are finally online, years late and billions over budget. Still, the finalized designs, established supply chains, and trained workforce should contribute to less expensive future deployment. This summer, TerraPower began construction on its advanced reactor facility in Wyoming. Meanwhile, SMR certification is building momentum in France as companies like NewCleo and Jimmy Energy look to deploy first-of-a-kind reactors by 2030.  

  • Finally, the characteristics of SMRs and AMRs, coupled with the aforementioned demand dynamics, have ignited venture interest in the space. Smaller form factors that can be deployed more flexibly and with a broader range of financing options have eased some concerns about the venture scalability of such projects. As a result, dozens of companies have been funded in recent years. Today, over 100 SMR and AMR designs are being developed at various stages and with different timelines across the world.

Key Early Assumptions / Potential Catalysts 

The improved environment around nuclear power leads us to a few critical questions, based on important assumptions about where scarcity and abundance sit in the nuclear value chain.

  • Assumption 1 → The timelines for most advanced nuclear projects are at least 7+ years out, likely longer if history is a guide. This may not align with our time horizon unless we can identify intermediate value inflection steps that create opportunities for exit, etc., similar to the life sciences ramp-up.

  • Assumption 2 → The crowded nature of SMR/AMR technology development (abundant capital and attention at that part of the nuclear value chain) and the lack of a clear set of winners should push us to areas of relative scarcity where solving key bottlenecks can accelerate the market overall (i.e. turning those hundreds of companies into customers immediately).

So where is there scarcity in the market that aligns with our investment horizon and approach? Three areas stand out initially, with each meriting a separate deeper dive should conversations with experts, operators, and founders push us in a certain direction. 

Fuel → Russian (and increasingly Chinese) dominance of key fuels and processing steps risks putting the West in a vulnerable position, which echoes the overreliance on cheap gas that contributed to the Ukraine invasion and the European energy crisis. Significant efforts are underway to de-risk the nuclear fuel supply chain. The US Congress passed a bill to limit the import of Russian uranium, while the “Sapporo 5” (Canada, Japan, France, the UK, and the US) announced plans to jointly invest over $4b to boost enrichment and conversion capacity over the next three years.

The biggest risk to decoupling from Russia has been HALEU (high-assay low-enriched uranium), on which many advanced reactors are being designed to run. Until the Russian invasion of Ukraine, Russia was the only place with a plant licensed to produce this material. Companies like Centrus Energy and a still-under-the-radar Founders Fund-backed startup are targeting this bottleneck, which could be an important enabler of the broader advanced nuclear space.

Project Development → Over the last two years, much of my work has centered on how to best help “emerging industrial” companies scale more effectively. While my early assumptions were largely centered on the need for new financing models, the critical bottleneck to deploying new projects across energy, materials, and manufacturing often turned out to be the capacity for on-the-ground project development and execution. Given the deterioration of the nuclear talent base across most Western countries, this problem is even more severe.

A key problem with effective (i.e., on-time, on-budget) project development is the fragmentation of the subcontractors needed to build a project end-to-end. Some companies aim to solve this through a reactor-agnostic platform for nuclear project execution: through a comprehensive oversight strategy – taking direct control over the supply chain management, sourcing, workforce coordination, and financing required for constructing power plant fleets – they hope to do for nuclear what SpaceX did for launch. Others are building fully modular, factory-made systems, innovating on the delivery and development model rather than at the reactor level.

Waste → Waste remains perhaps the most politically and socially charged aspect of nuclear energy, leading to decades of information warfare despite the relatively boring nature of the problem. Historically, the commercial incentives to store waste weren’t particularly attractive, making it a largely political endeavor. 

Today, countries and companies around the world are starting to see opportunities to turn waste from a burden into an opportunity through recycling.

July

Thesis

AI-enabled Services

We see an interesting opportunity in the context of end-to-end AI service providers. 

September 2024

Industry Transformation

We see an interesting opportunity in the context of end-to-end AI service providers. 

We believe that in certain cases, AI sold as a SaaS product can neither unlock its full potential nor allow developers to capture the value they are creating. There are a few reasons for this:

  • The limited reliability and lack of guaranteed perfect performance of AI models have led to their positioning as co-pilots rather than end-to-end task performers. A few use cases aside (e.g., coding), we don’t see such end-to-end task performers coming to market anytime soon. This means that successful deployment depends on adoption by a customer’s workforce. Naturally, this makes the ROI of these systems hard to measure and is paired with a sentiment shift that the productivity increases associated with those systems might have been overhyped. The fact that an intangible ROI is running against a very tangible cost of inference for developers does not make this any easier.

  • In a co-pilot world, breaking out becomes even harder for emerging companies. They have a structural disadvantage over established companies that can easily develop and distribute a co-pilot to their existing customers as part of their platforms. This is especially tragic for emerging companies because they require feedback loops and data to improve their algorithms. Without this, they inevitably fall behind the competition in terms of technical capabilities.

  • Pricing models that work in the context of software (e.g., seat-based) don't work in the context of AI, as the focus is often on productivity gains (i.e., getting more done with fewer seats). Therefore, there is a need for value-based pricing.

As a result, we see an interesting opportunity in the context of end-to-end AI service providers. These companies focus on one specific job and guarantee the successful execution and delivery of that job. Internally, these businesses will utilize AI as much as possible but keep high-quality domain experts who can jump in when there are issues to ensure successful job delivery. Over time, these companies accumulate substantial proprietary data from “end-to-end” feedback loops of delivering jobs. This holistic approach puts them in a unique position to develop best-in-class models for a specific use case, leading to increased automation. In the long term, the COGS of these businesses will converge toward the cost of compute.
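A minimal sketch of the operating loop this implies – a confidence-gated pipeline in which the model delivers what it can and routes the rest to a domain expert, with every delivered job logged as training data. All names and thresholds here are hypothetical:

  from dataclasses import dataclass, field

  @dataclass
  class JobResult:
      output: str
      confidence: float
      handled_by: str

  @dataclass
  class EndToEndService:
      threshold: float = 0.9
      training_log: list = field(default_factory=list)

      def deliver(self, job: str) -> JobResult:
          draft, confidence = self.model_attempt(job)
          if confidence >= self.threshold:
              result = JobResult(draft, confidence, "model")
          else:                                   # expert ensures guaranteed delivery
              result = JobResult(self.expert_review(job, draft), confidence, "expert")
          # Every job, automated or not, feeds the proprietary data flywheel.
          self.training_log.append((job, result.output, result.handled_by))
          return result

      def model_attempt(self, job):
          return f"draft for {job}", 0.8          # placeholder model call

      def expert_review(self, job, draft):
          return f"expert-corrected {draft}"      # placeholder human step

  svc = EndToEndService()
  print(svc.deliver("freight customs filing #4711"))

As the model improves on logged deliveries, the expert-routed share shrinks and COGS converge toward compute, per the margin argument above.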

In a lot of industries, professionals are either already using extremely sticky software or the software as it is offered doesn’t fit in the specific workflows (it is reasonable to assume that capitalism has led to every software niche being explored in the past 15 or so years). As mentioned above, many of the companies that have successfully acquired users at scale are already planning to roll out co-pilots as features of their products. For an AI-native company to build out a software suite and spend substantial amounts on customer acquisition is likely not the best use of resources.

These hurdles can be circumvented by offering delivery of an entire job, with results compatible with whatever legacy software customers might be using. Over time, these companies might decide to build or buy software platforms on top of their AI-enabled service, interlocking themselves with their customers’ processes and generating meaningful lock-in (cf. Nvidia’s software stack, Marvell’s IP portfolio).

In the cases where conditions for AI-enabled service to emerge exist (see criteria below), we see this as having potentially three effects on market structure:

  1. Consolidation: Some industries may see a consolidation where AI enables a few large players to dominate by integrating and scaling more effectively than before.

  2. Maintained Concentration: In other industries, concentration may remain the same, but new AI-enabled companies could compete head-to-head with existing large players, potentially reaching similar sizes and profitability.

  3. Fragmentation: Certain industries might experience fragmentation, where AI enables many smaller players to operate independently. This could lead to a need for new platforms or aggregators to organize and leverage these fragmented services.

We think the most interesting venture cases will emerge in the context of 1. consolidation and 2. maintained concentration. In the context of 3., it is interesting to explore the next-order effects (see Marketplace for AI-enabled Services and Products).

Independent of the structural market outcomes, AI-enabled service businesses require specific conditions to emerge and thrive. We differentiate two types of requirements.

First, necessary conditions must hold for these businesses to pursue opportunities at all. Many of these opportunities, however, may become commoditized, leading to numerous profitable but modestly sized businesses, typically in the $10 million revenue range (i.e., fragmentation).

Therefore, for market outcomes 1. and 2. and venture-scale outcomes to occur, opportunities must have the potential for significant competitive advantages, or "moats." These moat conditions are likely present in only a small subset of AI-enabled opportunities.

Our primary objective is to identify and focus on the most promising categories where both the necessary and moat conditions are met. These categories represent the most attractive opportunities for substantial growth and success in the AI-enabled service sector.

Necessary Conditions

  • Objectifiable Outcomes: Objectifiability in outcomes is crucial to 1) training the models and 2) lowering transaction costs with customers. 

  • Easy to Hand Off from/to customer: Easy hand-off is critical to lower transaction costs and make sure the business can scale without integration work, etc. 

  • Technology Maturity: The technology utilized to deliver services needs to be sufficiently mature, or there needs to be a clear path to maturity. In the beginning, human labor supporting the delivery of services is fine, but there must be a clear path to attractive unit economics with close to 90% automation.

  • Value-based Pricing Possible: The thesis depends on a company's participation in the upside of what is 1) generated at a lower cost or 2) made possible due to a new service. It is critical for the economics that the service provider can sufficiently capture the value generated to ensure top-notch margins that are improving as the technology matures. 

Moat Conditions (at least one or two of them need to be true)

  • Enabling Technology Moats: Using off-the-shelf LLMs will not lead to sustained technology moats. Technology moats will emerge where the quality of the service offered relies on developing and integrating with other types of ML / AI, which will lead to some initial barriers to entry from a technical perspective.

  • Improvements in Technology from Feedback Loops: Building on the previous point, another source of possible moat is technology that improves through feedback loops as services are delivered to customers and lessons are learned. This means the market leader can improve its technology fastest, leading to winner-takes-most situations.

  • Sustained Unique Customer Access: Efficient customer acquisition for a lot of these businesses will be key to delivering top-notch profitability margins in the long term. Those categories where certain types of companies have unique access to customers (e.g., due to industry pedigree) are likely to be attractive. Especially when paired with the technology feedback loop condition outlined. 

  • Liability Underwriting: The assumption is that many of these service businesses will have to take liability for correctly delivering their services. If liability is a crucial aspect of the offering, the amount of risk a company can take is a function of the cash on its balance sheet that can buffer potential failures; better-capitalized providers can therefore be more aggressive.

  • Regulatory Moat: Benefit from licensure requirements and other regulatory hurdles, which act as a natural barrier to entry and provide a stamp of trust and credibility. However, it is unclear whether this is actually the case. Lawyers require licenses, but barriers to entry are fairly low and based on academic performance. If the underlying models are similar, won’t everybody or nobody get approved? 

  • Brand / Trust: Service businesses are inherently built on a high degree of trust and brand recognition. This trust and brand let customers know that somebody can be held accountable and that their choice won’t be questioned by their bosses or investors (e.g., hiring McKinsey, BCG, or a Big Four firm). It is likely that the same dynamics play out here and that this can be a source of sustained competitive advantage.

July

Thesis

The "Stranded Asset Exploration Company"

Building the KoBold of Remediation

May 2025

Environmental remediation is a large, lucrative market defined by fragmentation and dominated by a service-heavy model (see appendix). To date, the market, which by some estimates delivers $20b+ in annual revenue, has largely resisted productization and technology-forward approaches.

We believe change here is inevitable and that a combination of technological momentum and novel business and financing models is creating substantial company-building opportunities for new players.

Today, there are over 450,000 brownfield sites in the US and nearly 3 million suspected contaminated sites in Europe. As a result, over $3 trillion in real estate value is essentially stranded. Traditional cleanup approaches are slow and costly, with the average federal cleanup taking 12-15 years, thanks to substantial regulatory overhead, labor constraints, and limited automation. 

Having had a front-row seat to the way KoBold has grown, we see potential for a similarly data-driven business model that i) identifies high-value contaminated sites, ii) applies superior remediation technologies (via advanced chemistry or otherwise), and iii) captures the value differential between pre- and post-remediation states through, for example, real estate development or the redevelopment or repurposing of stranded assets (energy, resources, industrial equipment).
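As a toy illustration of that value-capture math – every number below is hypothetical:

  def stranded_site_npv(acquisition, remediation, carry_years, exit_value, rate=0.10):
      """Back-of-envelope: buy, remediate, and exit a stranded site.
      Inputs in USD millions; exit value discounted over the carry period."""
      discounted_exit = exit_value / (1 + rate) ** carry_years
      return discounted_exit - acquisition - remediation

  # A $5m brownfield supporting $40m of redevelopment value after a
  # three-year, $12m cleanup, discounted at 10%:
  print(stranded_site_npv(5.0, 12.0, 3, 40.0))  # ≈ 13.1

Value capture hinges on compressing the remediation line and the carry period, which is where the technological convergence described next comes in.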

The why now is backed up by technological convergence that is helping to change the economics of brownfield transformation. Advanced sensing can precisely map contamination without extensive manual sampling, while innovative treatment methods can destroy persistent pollutants in days rather than years. Industrial robotics is increasingly becoming viable, which is particularly important in a market where labor constraints are substantial and worsening (thanks to an aging workforce and little infrastructure for backfilling the gap).

We imagine the ideal business model and capital stack will resemble KoBold’s hybrid approach: venture capital for technology development paired with infrastructure-like financing for asset acquisition and development. This bridges fast-moving tech-enabled discovery with slower land and asset development cycles while allowing value capture from both services and asset appreciation.

We see remediation as a wedge to a broader opportunity, which we have referred to in the past as a “stranded asset exploration company”. Companies will likely start by focusing narrowly on specific geographies, pollutants, or site types (mines, agriculture, waterfronts, etc.) before expanding.

PFAS in soil or water, for example, represents perhaps the most immediately addressable and culturally salient initial opportunity. However, there are a wide variety of potential starting points:

  • Urban brownfields in high-demand real estate markets

  • PFAS-contaminated military bases and airports

  • Former manufacturing facilities with prime locations

  • Mine tailings containing recoverable metals

  • Waterfront properties with sediment contamination

  • Agricultural land impacted by contaminants near expanding suburbs

  • Contaminated aquifers in water-stressed regions

Companies like Valinor and biotech precursors like BridgeBio present another structural model to consider for such a company. Our understanding is that the go-to-market model in remediation, particularly around government-owned and monitored sites, is highly relationship-based, even more so than other government procurement processes.

Thus, if a company can centralize those relationships and the playbook for winning remediation mandates, they can scale that across a range of vertical focuses that can be staffed and financed in ways that are right-sized for a given opportunity. There are also centralizable technical capabilities that can be developed, from advanced chemistry to robotics.

There is substantial policy uncertainty surrounding the opportunity. The EPA and the environmental market overall are hot-button political issues where resourcing could be shifted or shut down with little warning. However, we think the amount of trapped value in the form of real estate, industrial equipment, and resources (minerals, energy, etc.) that can be unlocked through better data and automation will make this opportunity resilient against policy shifts. 

July

Thesis

Scientific Knowledge Acceleration

Building the 21st Century "Max Planck Club"

April 2025

Infrastructure Tech

One way to think about the slowdown in scientific discovery is that it is operating at the wrong level of resolution. Our current research ecosystem – constrained by peer review, “publish or perish” productivity incentives, and hyper-specialization – has systematically dismantled the scientific freedom that enabled the researchers behind the 20th century's most transformative discoveries – what Donald Braben termed the “Max Planck Club”.

This creates massive opportunity costs to society and signals a significant company-building opportunity. 

We see two emerging pathways to recalibrate this resolution:

  1. Individual Amplification: Tools that enable exceptional researchers to transcend institutional constraints and operate like 1-person research institutes. AI-augmented scientific workflows could dramatically expand what a single scientist can accomplish across literature review, experiment design, and analysis.

  2. Mass Collaboration: Platforms that enable unprecedented collective intelligence by breaking down information silos between disciplines and creating new coordination mechanisms for distributed discovery.

We are most interested in pathways focused on one or more key levers below:

Funding & Incentive Redesign: Models that reward researcher autonomy, risk-taking, and open collaboration rather than incremental, pre-specified outcomes. This includes both traditional capital allocation mechanisms and novel approaches like prediction markets for scientific outcomes.

Human-AI Scientific Workflows: Tools that seamlessly combine what AI is good at (e.g., literature review, document creation, etc.) with human scientific intuition (e.g., asking better questions).

Proprietary Data Assets: Platforms that generate unique, structured scientific data through novel experimental methods or researcher collaboration.

While many of the hybrid non-profit/for-profit research institutions that emerged in 2021-22 (the various attempts at building OpenAI for Chemistry or Biology) have failed to gain traction, we remain interested in thoughtful full-stack approaches, particularly given the capital availability that now exists again for such "compound" company-building models. The most compelling version might integrate all these elements – new funding models, workflow tools, and collaboration platforms – into coherent systems that reshape how science is produced and validated.

While we are already invested in Sakana and recognize that many solutions may fit well within the attention radius (if not the current technical blast radius) of large AI labs, given the philosophical proximity of science and general intelligence, we also believe the "why now" argument is compelling: trust in academic institutions is at an all-time low, and the geopolitical necessity of scientific breakthroughs is at a generational high point.


July

Thesis

AI-Driven Industrial Design

The rapid advancement of AI in design is paving the way for entirely new workflows in industrial design and CAD

April 2025

The evolution of Computer-Aided Design (CAD) and rendering tools has significantly transformed design and manufacturing since the 1960s, transitioning from basic 2D drafting to advanced 3D modeling and simulation. Despite these advancements, slow design processes and rendering times remain critical challenges, leading to productivity losses, missed deadlines, and increased costs. It is trite to say that accelerating the pace at which we can design these systems is directly related to the pace at which societies can tackle critical problems.

AI can play a critical role in the product conceptualization, design, and optimization process. In this context, we’ve been actively exploring AI-driven CAE. However, we are equally excited about the role that generative AI can play in the earlier stages of the product design process. 

The rapid advancement of AI in design is paving the way for entirely new workflows in industrial design and CAD. These AI-driven workflows will fundamentally reshape the early design process by enabling ultra-fast iteration, real-time visualization, and seamless exploration of concepts. By drastically reducing the time required to generate initial renders, AI empowers designers to experiment more, refine their ideas faster, and create touchable, tangible prototypes much earlier in the design cycle. The ability to move rapidly from idea to visualization represents a fundamental shift in how products are conceived, opening up new opportunities for creativity and innovation.

However, while AI-assisted sketching and rendering significantly accelerate the concept ideation phase, the real challenge—and likely the biggest value creation opportunity—lies in automating the CAD modeling process. AI-driven CAD tools are emerging to bridge this gap, translating initial sketches and AI-generated renders directly into manufacturing-ready 3D models. 

We have two hypotheses on how this market might play out:

  1. The first hypothesis is that nailing the AI-powered CAD generation process is the optimal entry point into this new design paradigm. Once AI can accurately generate parametric, production-ready CAD files, the natural expansion will be into AI-driven visualization and early-stage exploration – moving in the reverse direction, from CAD to concept rendering, rather than the other way around.

A key distinction in this evolution will be the complexity of the designed components. There will likely be different workflows for simpler, standalone parts (which AI can quickly generate and refine) versus larger, more complex systems that require deep engineering validation and assembly considerations. While AI may rapidly accelerate the early stages of visualization, the CAD side of the equation remains more technically demanding, making it the logical “wedge” to solve first.

  2. The second hypothesis is that, depending on the complexity of a given product, AI-powered CAD may initially be limited to simpler parts, where automation can handle straightforward geometries, constraints, and manufacturability checks. In contrast, more complex assemblies and intricate systems (such as a car, aircraft, or industrial machinery) will likely continue requiring traditional CAD workflows with human oversight, at least in the near to mid-term.

Depending on how long this takes, we may see the emergence of two parallel AI-driven tool categories:

  • AI-CAD for Simple Parts – Fully automated or highly AI-assisted CAD tools that quickly generate production-ready, 3D-printable components with minimal human intervention. (e.g., Backflip)

  • AI-driven design exploration for complex systems – AI-driven workflows that assist in iterative visualization, ideation, and optimization but still require manual refinement in traditional CAD tools for complex, interconnected systems. (e.g., Vizcom)

While the long-term goal is likely a seamless AI-driven pipeline from concept to production, the mid-term reality may be a split in AI tooling—one focused on streamlining simple part generation and another enhancing complex system design and iteration without replacing the full CAD process. The key question in this scenario is how long it will take for AI-CAD to handle complex systems.

Note: AI-powered CAD for industrial design must be distinguished from AI-generated 3D assets, which focus on visually appealing but non-engineered models for gaming and AR/VR. Unlike simple AI 3D generators, AI-CAD tools must produce parametric, editable, and manufacturing-ready models with real-world constraints, enabling seamless integration into production workflows.
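To make the distinction concrete, below is a minimal sketch of what a parametric, manufacturing-ready model looks like, using the open-source CadQuery library as one possible target representation (the part and its dimensions are invented for illustration). Unlike a frozen mesh from a generic 3D generator, every dimension is a named parameter that an AI-CAD tool – or a human – can edit and regenerate.

```python
# Illustrative only: a parametric mounting plate in CadQuery (pip install cadquery).
import cadquery as cq

# Named parameters an AI-CAD system could expose for editing.
width, depth, thickness = 80.0, 60.0, 10.0   # mm
hole_diameter = 6.0                          # mm, e.g. clearance for an M6 bolt

plate = (
    cq.Workplane("XY")
    .box(width, depth, thickness)
    .faces(">Z")
    .workplane()
    .rect(width - 15, depth - 15, forConstruction=True)  # corner hole positions
    .vertices()
    .hole(hole_diameter)
)

# Export to STEP, the interchange format downstream CAM/manufacturing tools expect.
cq.exporters.export(plate, "mounting_plate.step")
```

Changing `width` and re-running regenerates a valid, constraint-respecting solid – the property that separates AI-CAD output from AI-generated 3D assets.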

July

Thesis

E(X)↑ C↓

On the emergence of "markets in everything"

April 2025

Digital Transformation

We are entering an era where the notion of "markets in everything" – long a trope among economics bloggers – is happening at never-before-seen speed and scale. Prediction markets are one obvious instantiation of this trend.

But the opportunity is more expansive than that. Massive amounts of (real-world) data coupled with rapidly improving processing and sensemaking capabilities create the foundation for understanding and pricing previously opaque risks, as we have written about in multiple memos: new standards, insurance products, or ways to price externalities. We expect this trend to accelerate as AI compresses the time between cause and detection. Information asymmetries will close faster, enabling players to identify and act on risks sooner. We see particular opportunity in areas where AI not only helps detect these information advantages (unless there is unique access to data, these are only temporary) but also helps act on them in much more efficient and effective ways, shifting the expected ROI of specific opportunities.

One example we are excited about in this context is detecting fraud and corporate misconduct related to harmful products. Instead of waiting years for harmful products to be flagged by regulators, platforms can mine medical and consumer data to spot dangers early. Combining these insights with AI-enabled reductions in litigation costs and increases in litigation success rates (e.g., through early issue detection and the abandonment of weak cases), the ROI associated with contingency fees can become increasingly attractive. The key question for such a business is how it can establish itself as the primary consumer platform for litigation, enabling it to maximize volume and acquire proprietary data at scale.
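To illustrate the title's E(X)↑ C↓ logic, a toy calculation (all numbers invented) of how higher success rates and lower costs shift the expected ROI of a contingency-fee case:

```python
# Illustrative (made-up) numbers only: E(profit) = p_win * recovery * fee_rate - cost.
def expected_roi(p_win: float, recovery: float, fee_rate: float, cost: float) -> float:
    """Expected profit per dollar of litigation cost."""
    expected_fee = p_win * recovery * fee_rate
    return (expected_fee - cost) / cost

# Status quo: 25% win rate, $10M recovery, 30% contingency fee, $1M cost.
baseline = expected_roi(0.25, 10_000_000, 0.30, 1_000_000)

# With AI: earlier issue detection lifts win odds (weak cases dropped sooner),
# and drafting/discovery automation cuts cost.
with_ai = expected_roi(0.40, 10_000_000, 0.30, 600_000)

print(f"baseline ROI: {baseline:.2f}x")  # -0.25x: negative expected value
print(f"with AI:      {with_ai:.2f}x")   # 1.00x: attractive on a portfolio basis
```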

Another example is that this could give rise to a new class of distressed debt investors. AI streamlines distressed investing by quickly analyzing legal documents and covenants to pinpoint optimal positions in the capital stack, while leveraging historical bankruptcy data to forecast restructuring outcomes under different legal regimes. Additionally, AI-driven litigation analytics and scenario modeling empower investors to strategically choose between litigation and out-of-court restructuring to maximize recoveries.

July

Thesis

Foundation Models for Structured Data

Unlocking the "action layer" across physical industries

March 2025

Industry Transformation

The rise of foundation models has transformed how we handle unstructured data like text and images. Yet many organizations still struggle to extract value from their structured data (tables, databases, time series). A study by MIT and Databricks found that only 13% of large enterprises are driving material business value from data and machine learning strategies – a problem unlikely to be eased as data volume grows exponentially, regulations expand, and talent shortages persist. 

Every industry runs on structured data. Faster, more accurate analytics directly impact operational decisions and bottom-line results, and we believe companies laser-focused on delivering this improvement have the potential to create and capture substantial value.

The current landscape around structured data is – and always has been – defined by bottlenecks: data integration across disparate systems, manual feature engineering requiring domain expertise, model selection involving extensive trial-and-error, and complex workflow integration. Foundation models address these challenges by learning from vast datasets to automate these processes end-to-end.

Recent breakthroughs like TabPFN demonstrate that models trained on millions of synthetic datasets can achieve zero-shot predictions on new tables, consistently outperforming traditional methods. These models eliminate the need to train from scratch for each new problem, creating a paradigm shift in how organizations approach analytics. The combination of these algorithmic advances with multimodal capabilities, evolving business models focused on capturing outcome-based value, and persistent talent constraints is accelerating adoption.
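As a sketch of what this zero-shot workflow looks like in practice, the snippet below uses the open-source `tabpfn` package's sklearn-style interface (API details as we understand them at the time of writing; treat them as illustrative):

```python
# Minimal sketch of zero-shot tabular prediction with TabPFN (pip install tabpfn).
# There is no per-dataset training loop: the model was pre-trained on synthetic
# tables, so fit() essentially stores the training data as in-context examples.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = TabPFNClassifier()       # pre-trained prior-fitted network
clf.fit(X_train, y_train)      # no gradient descent on this dataset
pred = clf.predict(X_test)
print(f"accuracy: {accuracy_score(y_test, pred):.3f}")
```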

Despite the generalizability of many emerging approaches, we believe the complexity of the workflows required to make use of the data and insight necessitates a sharp focus on building industry-focused solutions (with, of course, a strong vision for sequencing into further segments). 

Better availability and assessment of structured data is, in fact, a core element of our “action layer” thesis across critical industries. 

  • Insurance underwriting benefits from automated risk scoring, incorporating structured profiles alongside multimodal data like claim photos.

  • Life sciences, particularly clinical trials, can leverage synthetic control groups from historical trial data, reducing required patient numbers and accelerating drug development timelines.

  • Supply chain operations leverage time-series foundation models for demand forecasting, recognizing patterns across products and locations by transferring knowledge from similar historical patterns.

  • Manufacturing facilities deploy models trained on sensor data from thousands of machines to predict failures before they occur—even for equipment that has never failed before—enabling proactive maintenance and reducing costly downtime.

  • Financial services models can process years of transaction data to identify patterns across thousands of variables, enabling faster, more accurate risk assessment and fraud detection.

Across the model landscape (for structured and unstructured data), we continue to be open to different arguments on where sustainable value will accrue in this ecosystem – i.e., where does the proverbial LLM blast radius end? 

While specialized foundation models for structured data have clear advantages over general-purpose models today, the pace of competition makes it inevitable that general-purpose models will sharpen their focus on structured data. Thus, durable advantages may ultimately lie not in the models themselves, but in the surrounding elements of the stack – proprietary data access, workflow integration, and physical-world action layers – the mechanisms that translate insights and predictions into real-world operational outcomes.

July

World View

Hardware Development Tooling

Enabling the physical technology supercycle

February 2025

Infrastructure Tech

Our ongoing exploration of the hardware development stack, from AI-driven CAE to PCB automation, has consistently pointed us toward a fundamental challenge: the immense complexity of coordinating diverse tools, stakeholders, and workflows across the hardware development lifecycle. While individual design tools have evolved, the job of orchestrating these pieces – managing requirements, test data, manufacturing handoffs, and team collaboration – remains a major bottleneck.

As Western economies pour unprecedented capital into hardware innovation across aerospace, energy, and defense, an entirely new class of hardware companies is emerging. And they are building with fundamentally different expectations around tooling and development speed. The incumbent hardware solution stack fails to meet these heightened expectations – it is fragmented across systems, heavily manual, and lacks real-time visibility. 

As a result, we have seen many emerging hardware companies rolling their own solutions to solve internal and external orchestration across the development lifecycle. Stoke Space's Fusion, an internal tool that they externalized, is one such effort. This trend, which we have seen inside several other companies, signals both the severity of existing tooling gaps and the demand for better solutions.

As such, we see a Stripe-like opportunity to enable and capture a portion of the value created by this new class of companies through the type of critical, but boring, infrastructure that we have deemed “plumbers for reindustrialization” in other research.

We see three primary areas of opportunity for new companies at the orchestration layer:

Test Data & Observability: The proliferation of sensors and testing equipment has created data noise that existing tools can't handle effectively. Real-time analysis of test data, coupled with AI for anomaly detection and optimization – DevOps-like telemetry and monitoring – could transform validation processes that historically relied on manual review and tribal knowledge.
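As a toy example of the DevOps-style telemetry pattern we mean (window sizes and thresholds invented, not tuned for any real test rig), a rolling z-score can flag transients in a sensor stream for human review:

```python
# Toy sketch: flag samples that deviate sharply from a rolling baseline.
import numpy as np

def rolling_zscore_anomalies(signal: np.ndarray, window: int = 50,
                             threshold: float = 4.0) -> np.ndarray:
    """Return indices where a sample sits more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(signal)):
        baseline = signal[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(signal[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return np.array(anomalies)

# Simulated pressure trace with an injected transient spike at t=800.
rng = np.random.default_rng(42)
trace = rng.normal(loc=100.0, scale=0.5, size=1000)
trace[800] += 10.0
print(rolling_zscore_anomalies(trace))  # -> [800]
```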

Unified Data & Collaboration Hubs (Next-Gen PLM): The shift toward distributed engineering teams and expansive supply chains (e.g. via outsourcing) has exposed the limitations of current tools. Engineers spend a material amount of their time on non-value-added work like converting files, updating documents, or searching for the latest designs. Modern cloud-based hubs that unify product data (requirements, CAD, tests) could dramatically improve productivity.

Manufacturing Orchestration: The gap between design and manufacturing is a major bottleneck. Tools that automate the translation of designs into manufacturing instructions and provide real-time feedback on manufacturability could significantly reduce iteration cycles and costs.

New platforms built specifically for these emerging workflows – distributed by default, data-intensive by design, and automation-ready from the start – are naturally advantaged.

From a go-to-market perspective, focusing on emerging hardware companies helps orchestration companies avoid legacy processes and tooling and instead focus on shaping modern development workflows. These companies are building complex hardware under intense time (and capital) pressure – they need tools that can keep pace. As these tools prove their value to early adopters, they can expand both vertically (serving larger enterprises) and horizontally (connecting more parts of the development process). 

However, this means our thesis relies on this new class of hardware companies being a durable and scalable customer base. If the end game is dozens of sub-scale acquisitions and a select few successes – leaving today’s incumbent hardware OEMs as the primary market – the entrenchment of existing tooling and orchestration companies (from Siemens to Jama to PTC) will be harder to break.

Similar to what we have concluded in our research into AI-driven CAE, success doesn’t require displacing incumbent tools outright. Rather than competing head-on with entrenched CAD/CAE/PLM systems, new platforms can focus on making these tools work better together – becoming the connective tissue that coordinates modern hardware development. Once established as coordination layers, these platforms position themselves to expand their footprint over time.

The PLM and hardware development tooling market can already be measured in the tens of billions, but we believe the truly transformative companies will win by expanding the market and helping hardware companies iterate and build at the speed of software. This acceleration creates a powerful flywheel: faster development cycles enable more products, which drives increased tool usage and data generation, further improving development speed. Just as software development tools expanded their market by enabling faster iteration cycles, we believe the winners in hardware orchestration will grow the market by unlocking new levels of development velocity.

The risks are real – long sales cycles, integration complexity, and regulatory requirements in sectors like aerospace and defense. But we believe the confluence of market demand (driven by reindustrialization), technological convergence, and incumbent blindspots create a unique opportunity for new platforms to emerge.

July

Thesis

AEC Design Tooling

When will we see Figma for the build world?

February 2025

Industry Transformation

Autodesk is a $65B company with 90% gross margins and earnings growth of 10%+ annually over the past decade. It is, in the views of many practitioners in the ecosystem, a monopoly in the worst sense of the word – extractive price increases paired with degrading product quality, closed and proprietary standards that lock in customers, and a lack of feature-level evolution to meet the needs of architects, engineers, designers, and contractors.

But Autodesk is just a symptom of a deeper problem. The entire AEC technology stack has evolved to reinforce silos rather than break them down. Each specialty has its own tools, workflows, and data formats, creating artificial barriers between disciplines that naturally need to collaborate. The result is an industry that remains extremely inefficient – construction productivity has historically grown at under 1% annually despite billions spent on software.

Perhaps counterintuitively, given the stranglehold that Autodesk (and other deeply ingrained products) holds, our early hypothesis is that focusing on design is the perfect wedge to transform this massive industry. Every project's design phase naturally brings together architects, engineers, contractors, and owners – each making decisions that cascade through the entire project lifecycle. This, in turn, creates the possibility of developing network effects (the type enjoyed by Autodesk) once at scale.

The question, then, is what creates the wedge for companies to build distribution in the first place and kick off the network effects flywheel – something that has been a challenge for new entrants, as evidenced by the lack of massive venture-backed outcomes to date. We believe several converging technologies are coming together to massively reduce the costs of experimentation, lower the barriers to real-time design collaboration between parties (minimizing the cascading delays that begin at the design phase), and expand the creative canvas of design possibilities.

  • WebGL and cloud-native architectures finally enable true browser-based 3D modeling at scale. Just as Figma used these technologies to make design collaborative, new platforms are rebuilding BIM from first principles for seamless multi-user collaboration.

  • Advances in physics-based simulation and generative AI allow instant validation of design decisions – emerging tools can compress structural engineering workflows from weeks to minutes and automatically optimize building systems for performance.

  • New platforms are bridging design and construction by translating BIM models directly into fabrication instructions, creating the potential to significantly reduce MEP installation costs.

We see three approaches emerging to leverage these technologies and begin embedding them into multi-stakeholder workflows:

  1. Next-gen cloud BIM platforms (e.g., Motif, Arcol): Browser-first collaborative design tools – i.e., "Figma for buildings". Here, we believe companies can build momentum through counter-positioning – API-first models that prioritize open document and data standards.

  2. AI-powered point solutions (e.g., Genia, Qbiq): Focused tools that dramatically accelerate specific workflows. Genia automates structural engineering analysis and optimization, while Qbiq uses AI to generate space planning options for real estate teams.

  3. Design-to-fabrication platforms (e.g., Stratus): Bridging the gap between design and construction by automatically translating BIM models into fabrication-ready instructions. Stratus has shown particular success in MEP, where better coordination can significantly reduce installation costs.

The path to end-to-end orchestration will follow a clear sequence: start by connecting architects and engineers through real-time design collaboration. Then extend to contractors, automatically translating designs into construction planning. As the platform becomes the system of record for design and planning decisions, it can naturally expand into procurement, payments, and project financing – using its unique data position to reduce risk and unlock better financial products. Eventually, these platforms could have a shot at orchestrating the entire building lifecycle – from initial concept through operations and maintenance.

Most importantly, these platforms will enable fundamental shifts in business models and incentives. Today's hourly billing and fixed-fee structures actually discourage efficiency – architects and engineers are paid for time and deliverables, not outcomes. But platforms that can measure and validate impact could enable new performance-based pricing models. Early adopters might start with simple metrics like design iteration speed or coordination time saved. Over time, as platforms gather more data across the building lifecycle, they could facilitate true outcome-based contracts where designers and engineers share in the value they create through better, faster, more efficient projects.


July

World View

Agent Authentication

Enabling active agent decision making

January 2025

Infrastructure Tech

As AI agents evolve from passive copilots into active decision-makers — scheduling meetings, replying to emails, pulling sales reports — they inevitably confront a critical bottleneck: authentication. Without secure access to user data and enterprise systems, agents remain limited in scope and incapable of delivering real value. The future of agent utility depends not on marginally smarter models, but on building trust, control, and access into the very fabric of how agents interact with the digital world.

Today, there is no unified solution to this problem. Developers are left stitching together OAuth flows, manually storing API tokens, and hoping their agents don’t inadvertently leak sensitive credentials. Existing identity infrastructure, like Okta or Azure AD, is built for humans — not autonomous actors. Meanwhile, model providers like OpenAI intentionally avoid the liability of handling real-world authentication and access. This has created a deep infrastructural gap: there is no secure, reliable, and developer-friendly layer for agents to act on behalf of users across different tools and services.

The company that solves this will not only unlock the next phase of agent capabilities, but also define a new category in developer infrastructure. It will offer a clean, programmable interface for granting agents time-limited, scoped access to perform tasks securely — whether it’s sending an email, querying a CRM, or posting in a team chat. Consent, identity, permissioning, and auditability will all be handled by design. Prebuilt integrations to common services, combined with a powerful SDK, will abstract away the complexity of working with dozens of authentication flows. Crucially, it will shield agents from direct exposure to tokens or secrets, ensuring compliance and mitigating risk.
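A hypothetical sketch of the developer surface such a layer might expose (every name and type below is invented for illustration): the agent presents a scoped, time-limited grant rather than a raw credential, and every authorization decision is audited by design.

```python
# Hypothetical sketch only -- not any real platform's API.
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class ScopedGrant:
    user_id: str
    scopes: set[str]                 # e.g. {"email:send", "crm:read"}
    expires_at: float                # unix timestamp
    grant_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    audit_log: list = field(default_factory=list)

    def authorize(self, agent_id: str, action: str) -> bool:
        """Check an agent's requested action against scope and expiry,
        and record the decision for auditability."""
        allowed = action in self.scopes and time.time() < self.expires_at
        self.audit_log.append((time.time(), agent_id, action, allowed))
        return allowed

# User consents to 15 minutes of narrowly scoped access; the agent never
# sees the underlying OAuth tokens, only this grant handle.
grant = ScopedGrant(user_id="u_123", scopes={"email:send"},
                    expires_at=time.time() + 15 * 60)
assert grant.authorize("agent_7", "email:send")        # permitted
assert not grant.authorize("agent_7", "crm:read")      # out of scope, logged
```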

We believe that such a company will gain early traction among developers building agents for productivity, customer support, and internal automation. But its long-term power lies in standardizing the very way agents access and act within digital systems. Just as Stripe abstracted payments and Twilio did the same for communication, this platform will abstract action itself. It will be the trusted intermediary between intent and execution, forming the backbone of an emerging agent economy.

In doing so, it won’t just authenticate agents — it will enable them. And that will make it indispensable.

July

Thesis

Geospatial Intelligence

The complexity of understanding and managing our physical world is increasing exponentially.

January 2025

Infrastructure Tech

Why is this important?

The complexity of understanding and managing our physical world is increasing exponentially. Climate change creates both acute (e.g. wildfires) and chronic stress on our (aging) physical infrastructure. Supply chains are becoming more intricate and, in the face of geopolitical tensions and the energy transition, are reconfiguring on a global basis in real time. 

Geospatial intelligence – novel physical world data captured via optical, multi-spectral, hyperspectral, and other advanced sensor systems via satellites, ground stations, and other modalities – represents a critical substrate for building the advanced action layers (e.g. truly comprehensive world models) that will power fundamental industry transformation in areas like mining, energy, agriculture, and defense. 

However, the trajectory of the geospatial intelligence market has largely been a story of massive perceived potential and disappointing results for builders, investors, and customers. While the use cases have been evident for decades, commercial value at scale has been slow to materialize and the net ROI of most earth observation companies has likely been negative. Adoption has been broad, but shallow – few commercial customers spend more than $1m per year on data and value-added services related to geospatial intelligence. Leaders on the upstream data collection part of the value chain (like Airbus and Maxar) still rely on government customers for a majority of their business while companies like Planet Labs still struggle to project commercial demand from quarter to quarter, indicating a lack of urgency to the data and analysis being offered. 

Solving the bottlenecks around geospatial intelligence that have kept deep commercial adoption out of reach – namely expensive data acquisition (for high-fidelity data), fragmented data accessibility, and a lack of connectivity from data to core enterprise/industrial workflows – has substantial implications for economic growth and human flourishing. The World Economic Forum projects that geospatial intelligence, as a platform technology, has the potential to drive more than $700 billion in economic value annually by 2030. A vast majority of this value will be created in critical physical industries – transforming land use, mitigating natural disasters, changing how we build and maintain infrastructure, reducing greenhouse gases, and addressing security and safety issues more proactively.

Why is this interesting? 

We believe these bottlenecks are finally beginning to fall thanks to two converging factors – technological step-changes and the emergence of urgent buyers for the key technological building blocks that will make cheap, precise, and actionable geospatial data possible. 

  • Launch costs have fallen 80-90%, enabling massive sensor deployment. While it took nearly 60 years to put 2,000 satellites in orbit, we launched 3,000 in 2023 alone

  • Next-generation sensors are achieving unprecedented coverage and precision. Today's systems can detect not just the presence of objects but their composition and behavior from hundreds of kilometers away, at sub-meter resolution

  • AI and compute advances have collapsed processing times and made it possible for non-specialists to make sense of multi-modal data – what took human analysts years now often takes minutes

The demand side pull, while still not fully materialized, is equally as important and developing quickly:

  • Insurance companies – and the entire insurance model – face existential pressure from climate-driven catastrophic losses (and regulatory intervention). Beyond risk assessment capabilities, improved, more transparent/accessible tooling can help to rebuild trust in this important segment of the financial system. 

  • Autonomous systems (and with it shorter decision-making windows) are increasingly factoring into defense and intelligence operations, putting a premium on breaking down the current data silos to develop advantaged (precise and real-time) sensemaking capabilities.

  • As we have observed through KoBold, the energy transition is creating entirely new customer segments (and forcing agility from large incumbents) focused on critical mineral discovery, methane detection, and other resource categories like water or forestry. 

  • Infrastructure operators, utilities, and construction firms are scrambling to maintain the trillions of dollars of assets needed to reindustrialize, electrify, and – more critically – simply keep the modern way of life (e.g. clean water) running. Proposed initiatives like The Stargate Project create another major tailwind for the geospatial intelligence market.

The above are just the handful of use cases we have been most exposed to through our investments and research. Like most great platform technologies, though, we believe many of the most valuable applications will be emergent. Thus, as we look at investments in the category, we are most interested in companies positioned to surf, rather than compete with, the dual blast radii of LLMs and Space Launch.

Which areas are most investible? 

Sensor Advantage / Infrastructure → While much of the sensor stack is being commoditized, competition at the powerful world model level (e.g. Niantic's LGM) will drive demand for truly differentiated imaging and sensor suites. High-precision, platform-agnostic, high-bandwidth, and real-time hyperspectral imaging stands out.

Data Fusion → As launch (and other sub-orbital geospatial sensor deployment) grows exponentially, data generation will scale along with it. If the status quo holds, silos and the need for bespoke solutions will only worsen. There is a Snowflake-scale opportunity to build data warehousing and piping for multi-modal geospatial data.
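A hypothetical sketch of the core primitive such a warehouse would standardize (all names and values invented): one spatio-temporal index over assets from any sensor modality, queryable by footprint and time rather than by vendor silo.

```python
# Hypothetical sketch only: a unified catalog over multi-modal geospatial assets.
from dataclasses import dataclass

@dataclass
class GeoAsset:
    asset_id: str
    modality: str      # "optical" | "sar" | "hyperspectral" | ...
    bbox: tuple        # (min_lon, min_lat, max_lon, max_lat)
    timestamp: float   # unix time
    uri: str           # where the raw data lives

def query(catalog: list[GeoAsset], bbox: tuple,
          t_start: float, t_end: float) -> list[GeoAsset]:
    """Return every asset, regardless of sensor type, whose footprint
    intersects `bbox` within the time window."""
    def intersects(a: tuple, b: tuple) -> bool:
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])
    return [a for a in catalog
            if intersects(a.bbox, bbox) and t_start <= a.timestamp <= t_end]

# Example: everything any sensor captured over one site in a given window,
# whether it came from a satellite, aircraft, or ground station.
catalog = [
    GeoAsset("a1", "optical", (27.8, -12.6, 28.0, -12.4), 1_700_000_000, "s3://bucket/a1"),
    GeoAsset("a2", "sar", (27.9, -12.5, 28.1, -12.3), 1_700_050_000, "s3://bucket/a2"),
]
hits = query(catalog, bbox=(27.85, -12.55, 27.95, -12.45),
             t_start=1_699_990_000, t_end=1_700_100_000)
print([a.asset_id for a in hits])  # both modalities returned together
```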

Geospatial Data as an Industry Transformation Wedge → Similar to Gecko in robotics, we believe the most valuable geospatial companies won’t be thought of as geospatial companies when all is said and done. Instead, we see major opportunities to use geospatial data as a wedge to build the workflows and intelligence engines that transform physical industries.

July

Thesis

Industrial Energy Efficiency

Energy demand is rising for the first time in over a decade thanks to rapid electrification, reshoring of manufacturing, and perhaps most notably, AI.

January 2025

Industry Transformation

Energy demand is rising for the first time in over a decade thanks to rapid electrification, reshoring of manufacturing, and perhaps most notably, AI. This demand is being driven top-down via policymakers and bottom-up from the private sector. Regulations like the IRA and CHIPs Act have driven significant growth in new manufacturing construction. Meanwhile, energy constraints have overtaken GPU availability as the core bottleneck to scaling AI for companies like Microsoft, Google, Meta, and OpenAI. 

The willingness of big tech companies to spend whatever is necessary to access energy in the name of AI has led to amusing re-estimations of future data center energy demand every few months.

“[Our expectation of] 83GW is up from ~56GW from the prior September 2023 modeling. Overall McKinsey now forecasts US data center energy consumption in terawatt hours (TWh), rising to 606TWh in 2030, representing 12% of total US power demand. Critically, this is up from ~400TWh in the September 2023 modeling refresh. This is relative to 147TWh in 2023 and 4% of overall US power demand.”

Meeting this energy demand, whether in service of climate objectives or geopolitical, energy, and technological sovereignty priorities, is of existential concern to economies around the world. As the saying goes, there is no such thing as an energy-poor rich country. Europe, in a trend that has continued since Russia invaded Ukraine, continues to struggle to meet the energy-related needs of its industrial champions. This has pushed them in droves to the US and other geographies, putting the continent's competitiveness, productivity, and growth at risk.

Energy abundance generally and response to data center demand specifically hinges on three important pillars: new power production, better transmission and distribution, and more efficient utilization.

As highlighted in other research, owning and operating physical assets can provide companies with a tremendous moat and allow them to capture more of the value they create. For this reason, companies focused on new power generation or physical infrastructure related to better transmission and distribution are interesting. However, such opportunities are often held back by factors like permitting that are outside their immediate control. 

Efficiency, on the other hand, is a problem best addressed by software and AI. This is particularly true for commercial and industrial buildings, which account for ~20% of final energy use (and rising, thanks to the growth factors highlighted above). In some places, like Ireland, data center use alone promises to consume nearly one-third of grid capacity in the near future. As energy costs become a more substantial profitability factor and increased consumption puts pressure on sustainability objectives, better solutions for commercial and industrial energy efficiency represent one of the biggest opportunities of the next several decades.

Many of these operations have concrete optimization functions with goals and constraints. In many cases, however, the complexity involved is too great for humans to grasp. As a result, we fail to set up appropriate optimization functions and systems around them, leaving systems far from their global optima. That's where we see massive opportunities for reinforcement learning: advanced RL makes it possible to address areas that were previously infeasible to optimize due to their complexity.

Managing the energy usage of highly energy-intensive operations (e.g., data centers, cooling facilities, etc.) fits these criteria. RL models are capable of driving significant performance improvements autonomously, saving substantial energy and cost. Phaidra, one company applying these models, was started by former Google employees who deployed these methodologies at Google data centers and saw up to 40% energy savings. The company recently announced energy savings of 16% at Pfizer's data centers. Meta has published similar efforts.
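To make the shape of the approach concrete, here is a toy sketch (dynamics, rewards, and scale entirely invented – not any vendor's actual system): tabular Q-learning choosing a cooling level that keeps a simulated facility in its safe temperature band at minimum energy cost.

```python
# Toy illustration only: Q-learning over invented cooling dynamics.
import numpy as np

rng = np.random.default_rng(0)
N_TEMP, N_ACT = 20, 3             # discretized temperature states x cooling levels
COOL = np.array([0.0, 1.0, 2.0])  # temperature removed per step by each action
COST = np.array([0.0, 1.0, 3.0])  # energy cost of each action
Q = np.zeros((N_TEMP, N_ACT))

def step(temp: int, action: int):
    """IT load adds 1-2 units of heat; cooling removes some. The reward trades
    energy cost against a large penalty for leaving the safe band (states 5-15)."""
    temp = int(np.clip(temp + rng.integers(1, 3) - COOL[action], 0, N_TEMP - 1))
    penalty = 10.0 if temp < 5 or temp > 15 else 0.0
    return temp, -(COST[action] + penalty)

alpha, gamma, eps = 0.1, 0.95, 0.1
temp = 10
for _ in range(50_000):
    a = rng.integers(N_ACT) if rng.random() < eps else int(Q[temp].argmax())
    nxt, reward = step(temp, a)
    Q[temp, a] += alpha * (reward + gamma * Q[nxt].max() - Q[temp, a])
    temp = nxt

# The learned policy should cool hard only as temperature drifts toward the
# hot boundary, idling when the room is safely cool.
print([int(Q[s].argmax()) for s in range(N_TEMP)])
```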

One of the key questions is whether there is enough quality sensor data to support these plans and whether the physical world (and its controls) is digitized enough for models to drive actions in it. It is likely reasonable to assume that digitization has penetrated far enough to provide reasonable granularity and actionability, but the more data and the more actionability, the better.

This field sits at the intersection of two areas that are core to our broader AI theses: 

  1. Massive economic value creation will happen in the physical world.

  2. The most interesting AI use cases are in areas where AI helps us develop an understanding of the world where complexity is so high that we previously could not. In previous writing, we have referred to these as “expansion” use cases. 

Moreover, similar to KoBold, we expect that building a company in this space will require hiring world-class people across various fields: 1) AI/ML, 2) software, and 3) optimization of niche systems. We believe that companies able to combine these three talent pools will build substantial talent moats.

July

Thesis

Spontaneous Software

As LLMs can create software cheaply and agents become skilled at connecting user experiences in novel ways, companies are starting to push ideas around self-assembling/spontaneous software.

January 2025

Fundamental Consumer

As LLMs can create software cheaply and agents become skilled at connecting user experiences in novel ways, companies are starting to push ideas around self-assembling/spontaneous software. We believe that, enabled by LLMs, a new paradigm could be on the horizon that increasingly merges the creation and consumption of software and makes a longstanding vision a reality.

We have previously written about this in the context of business software (see here), but we see an equally interesting opportunity in pro/consumer software and applications. It is important to stress that this is an incredibly nascent area with more questions than answers. 

A few of the questions we have:

  1. Where does this happen? For these experiences to feel genuinely magical and software to feel spontaneous, LLMs must have the maximum context of a user's digital experience, data, and usage patterns across applications. The most likely place for this to live is within a user's operating system. If operating systems are too slow to adapt, apps will likely emerge. However, it is unclear how long their staying power will be and how useful they will be if the tools and experiences they create/enable are removed from, and not interconnected with, default operating systems. In that case, the default place where these things live could be the interfaces of the large LLM providers. Claude has taken steps in that direction.

  2. How do these systems' privacy mechanisms work? As described above, they require a lot of context to feel magical. The question is how this context is handled privately. Some approaches mitigate risk, such as private cloud enclaves, but there could be a world where these kinds of applications only start taking off when models have 1) memory and 2) can run on consumer devices (e.g., phones and PCs).

  3. What do monetization and business models look like here? It is unclear how much users will pay for custom software tools, especially if this requires work to create them. Only 30% of Android users customize their OS, and the current app paradigm has not trained people to pay for utility-type services (the result of a combination of tools being used as a lock-in mechanism and ad-supported models). In a world where apps become cheaper to produce and likely more abundant (due to the same dynamics discussed here), it is unclear whether most users won't just use apps that are increasingly available for niche use cases until software becomes completely self-assembling, anticipating users' every intent ahead of time.

If we find good answers to these questions, we will be excited about this space and its potential.  

July

World View

Digital Antidote

Touching grass.

January 2025

Fundamental Consumer

As culture becomes more homogenous and consumption more solitary (a conundrum we wrote about in Airspace and Bubbles), consumers increasingly crave ways to identify with 1) physical brands, 2) physical/ephemeral experiences, and 3) their local/smaller communities and their local brands. 

While this can take many shapes, we see the potential to build significant businesses around this shift and keep our eyes open for them. To give a few examples:

  • Next-generation sport leagues

  • Strong local restaurant brands and emerging subscriptions, events, etc.

  • Increased inflow into mega-churches that offer smaller group gatherings 

  • Local Fashion Brands (e.g., Bandit)

  • Athlete/chef retreats (e.g., Adam Ondra clinic; Mads Mikkelsen Japan Trip) 

  • Running clubs for dating

  • ...

That being said, there are some structural challenges around how scalable these things are and to what extent they are venture cases.


July

Thesis

LLM-enabled Toys (Care Companions)

LLMs are enabling novel embodied AI use cases.

December 2024

Fundamental Consumer

LLMs are enabling novel embodied AI use cases. We believe there is a high probability that in five years, most toys, from stuffed animals to action figures to Barbies, will have some kind of LLM-enabled voice capability. We see a few benefits associated with these LLM-enabled toys:

Naturally, we believe that data privacy and safety are crucial to these toys being beneficial and successful. Therefore, we believe they must have the following properties:

We see an interesting opportunity for a commercial player to emerge here. Specifically, we see an opportunity to build an operating system that meets the standards above and enables owners of IP and distribution to build on top. In addition, we see significant opportunities to extend this platform in other areas, such as elderly care.


July

Thesis

Unlocking Tacit Knowledge Through Constellations of Experts

The relationship between individual and organizational performance has historically been governed by management frameworks – from Alfred Sloan's GM to Andy Grove's creation of modern OKRs at Intel.

December 2024

The relationship between individual and organizational performance has historically been governed by management frameworks – from Alfred Sloan's GM to Andy Grove's creation of modern OKRs at Intel. These systems attempted to solve the challenge of measuring, improving, and scaling human potential across an enterprise. Yet they remained constrained by the limits of human observation and the difficulty of capturing tacit knowledge – the intuitive expertise that defines mastery of a task but has, thus far, mostly resisted codification.

Over the last 20 years, "game tape" and statistical sophistication have revolutionized athletics (and other highly quantifiable professions like enterprise software sales) by enabling precise feedback loops, accountability, and recognition. AI is now driving a similar transformation of the broader professional universe, where the relationship between inputs and outputs is often harder to grasp. Professionals have always valued mentorship and coaching, but access has historically been limited by cost and scale (hence "executive" rather than "employee" coaching). AI promises to democratize this type of performance enhancement (and an organization's ability to measure it) in the same way that companies like Synthesis address Bloom's Two Sigma problem in education.

Our hypothesis is that “constellations of (AI) experts” – deployed across every facet of professional development and organizational performance – will become as fundamental to career success as mentors and coaches are to elite athletes today. Several converging catalysts are making this possible. 

  • The mass-market deployment of co-pilots and proto-agents has rapidly normalized AI-human collaboration. More than 60% of physicians now use LLMs to check drug interactions and support diagnosis – early evidence of adoption for high-leverage decision support. 47% of Gen Z employees say ChatGPT gives better career advice than their boss – signaling dissatisfaction among young workers with the status quo.

  • The proliferation of audio/video capture in professional settings generates rich data to help these systems better understand and improve performance. People increasingly operate with the assumption that every call is recorded, while younger employees regularly go viral for sharing layoff videos online. 

  • The economics of AI are reshaping both organizational structures and individual incentives. Companies are shifting from fixed to variable cost models, flexing labor (human and agent) up and down based on demand.  This, in turn, is shifting how workers are measured and compensated. As a result, professionals must proactively adapt to succeed in this new paradigm where human judgment and AI capabilities become increasingly intertwined.

We see several areas where the “constellations of AI experts” will be professionally valuable. In each of these categories, we expect the most successful platforms will combine automated interactions, human experts in the loop, and content/validation that come together to create holistic systems of improvement. 

  • Organization-wide solutions that integrate deeply with company context to provide AI-powered coaching and feedback loops. While employees have shown a willingness to trade privacy for better tools, trust and security guardrails are essential. 

  • Individual-focused platforms that grow with professionals throughout their careers, combining performance enhancement with credential creation in an increasingly fluid labor market. 

  • Solutions for high-turnover industries that capture and distribute best practices to improve training efficiency and retention (e.g. frontline audio-first interfaces)

  • SMB owner enablement systems in areas like the skilled trades and family medicine, to make it possible to i) capture and transmit tacit knowledge (streamlining hiring/training while increasing terminal value) and ii) help operators compete without needing to bring in expensive consultants or PE expertise

These are, to be clear, highly divergent use cases that necessitate different product philosophies, business models, and competencies from the companies building solutions. However, they share important characteristics, namely that they all represent opportunities to use AI and better data to make professional tacit knowledge, action, and context visible and measurable, unlocking precise intervention to help individuals (and by extension teams and companies) grow into their potential. 

July

Thesis

AI-Enabled Asset Ownership

When to sell to incumbents vs. when to compete.

November 2024

Industry Transformation

For companies deploying AI in legacy industries, a key question is whether to enable incumbents by selling them solutions or to compete with them by taking a more full-stack approach. The trade-offs between these two models are something we have started to explore through our AI-enabled services analysis and this piece on when to compete with and when to sell to incumbents in an industry.

Recently, several firms have shared public theses on the opportunity for emerging AI companies (or vertical market software companies) to capture additional value in a given value chain by fully integrating via the acquisition of assets – as opposed to selling solutions to incumbents or taking a more organic (build instead of buy) approach to going full stack.

Slow, which helped finance the $1.6b acquisition of parking operator SP+ by software company Metropolis, calls the model "Growth Buyouts". Equal Ventures, which recently opined on the opportunity for such a model in insurance, calls it "tech-enabled consolidation". Vertical market software investor Tidemark calls the approach "tech-enabled vertical roll-ups". Re:Build Manufacturing calls its technology-driven manufacturing roll-up model an "American Keiretsu".

Our current hypothesis is that while the AI-enabled acquisition of services businesses (with venture dollars) may not be wise, there is a significant opportunity for emerging AI, software, and robotics companies to capture more value and develop value chain control by acquiring legacy assets in physical industries. 

For decades, venture capitalists have been engaged in what seem like Sisyphean tasks: digitizing businesses that operate assets in the real world. These efforts have struggled for many reasons, from software shortcomings to incentive problems, structural challenges in GTM, and a lack of skilled labor on the customer side. We see a trend of novel ML models solving the first of these problems by operating assets end-to-end without much human input. Yet the latter challenges remain. As a result, AI-native companies addressing these problems are prone to leaving value on the table and, due to slow adoption, are likely slower at training and developing their models, forgoing a lot of additional value. AI-enabled asset ownership represents one path to capturing that value.

Sequence matters for companies that go down this path. Companies should prove they can build technology and deliver ROI (for early customers or via a smaller-scale organic full-stack approach) before embarking on buying distribution via M&A. The only cases where early M&A can be attractive are those where smaller targets that are structurally very similar to large-scale targets in the market can be acquired for less than the cost of traditional GTM. Initially, these businesses have venture risk profiles; only after the second or third large acquisition should they be derisked and predictable/repeatable enough for investors with a lower cost of capital – infra, PE, etc. – to consider participating. By the time a company reaches this point, venture investors should have seen highly attractive returns.

Initial Hypothesis on Key Conditions

Below is an initial hypothesis for when it makes sense for a company to vertically integrate via acquisition as opposed to doing so organically or remaining a software/technology vendor to a given industry:

  • The company must have a demonstrated "production advantage"; i.e., a clear technological or product edge that creates compounding value in an industry. Companies leveraging exclusively off-the-shelf technology likely lack the differentiation to deliver venture-scale outcomes, even with strong operational execution and financial engineering. If a PE fund working with Accenture can go after an opportunity, or if human labor is cheaper on an efficiency-adjusted basis, it is unlikely to be a VC case. If solving the problems requires a combination of world-class technologists AND operators, this becomes an interesting opportunity for venture-style risks and outcomes.

  • Customers have proven structurally unable to adopt and deploy a company's solution to its most productive extent. Alternatively, they seem unwilling to pay for the full value of it. This can be due to various reasons, from lack of scale to incentives leading to stasis in a given market ("if my competitor doesn't innovate, I don't have to"). We should be able to pinpoint a structural issue – and generally point to evidence from a company's own experience – with a given market/customer base to be certain the ineffectiveness is not a product issue.

  • Building on the previous criteria, companies spearheading the buy-out strategy should be building technology that changes the core way an asset is operated, transforming the economics of the business/industry. Most likely, that is where existing operators are (somewhat paradoxically) least incentivized to adopt technological disruption. This is what makes the Metropolis acquisition of SP+ such a compelling application of this approach. SP+ has 3,000+ parking locations around the world where the core customer experience (paying for parking) can be largely automated. While the "work around the work" (maintenance, security, etc.) still requires people, the ROI around the primary transaction is much easier to understand than situations where the AI solution is helping people deliver the primary solution more efficiently (e.g. home services models, legal services, etc.).

  • There is likely a sweet spot in the level of complexity involved in operating an asset that makes it a fit for AI-enabled acquisition. Complexity can stem from the core value proposition being complex, from several offerings being performed at the same asset (leading to compounded complexity), or from the "work around the work" being significant (e.g., for regulatory reasons). Too little complexity in the core value proposition and it becomes a PE case; too much and the operational overhead reduces the leverage associated with improving the margins of the core asset. Ideally, the complexity/problems across holdings within the same space should be the same (e.g., parking lots), and skills easily transferable. We should be able to pinpoint these levels of complexity and identify assets/problems that meet the sweet spot.

  • The category a company is operating in needs to have acquisition targets operating at scale (ideally businesses worth USD 1B+ with additional value creation in the several hundred millions – further analysis needed here). Buying assets operating at scale that can be fully optimized and automated via AI is substantially more attractive than rolling up locally delivered services businesses. Again, this is what makes the SP+ acquisition so attractive: its 3,000+ parking locations around the world are likely all run very similarly. Ideally, solutions deliver not only cost savings but also growth opportunities. We are also interested in companies with a view on how the integration of software and legacy assets will unlock increasing ecosystem control and turn the business into an industry operating system.

  • Companies must have advantaged access to talent across functions. It is rare for a founder or founding team to understand “what great looks like” in areas where they have not had direct experience. A team of software engineers is likely unfamiliar with what makes a great industrial CFO or service-business COO. As a result, we may expect the pool of founders well-equipped to build such a business to be small. We have seen this play out at companies like KoBold Metals, which combine highly scientific founding teams with business acumen. 

These criteria still don’t fully answer why/when it is better to grow a full-stack solution via acquisition rather than through a more organic approach. One primary reason a company would choose to grow via acquisition is if the geographic footprint and surrounding “infrastructure” of an industry will look largely the same in the future as it does today. In such cases, the speed of distribution created by acquisition is enough of an advantage to overcome the accompanying cultural and organizational complexity that could be mitigated with a more organic strategy.

To use the Metropolis example: should we expect the footprint of the parking industry to be more or less the same in 10 years as it is today? While autonomous vehicles may have some impact at the margin during that period, the inertia of the built environment probably means we should expect the flow of traffic and parking to remain largely the same (airports, stadiums, commercial centers, etc.).

A counter-example is H2 Green Steel, which has raised multiple billions of dollars to produce steel with 95% lower emissions than traditional steelmaking. Because the company’s steel production depended on access to ample clean energy, it couldn’t simply acquire and transform underperforming steel facilities despite the similarity in equipment, permitting, and human capital needs. To transform the industry around its vision, the company was forced to take a more organic approach.

Companies also might pursue a buy-instead-of-build strategy when their technology can be easily integrated with existing assets and infrastructure, substantially reducing time to value for a given solution.

There are likely several other criteria in support of (and against) the strategy of vertically integrating via acquisition which need to be explored in further depth. 

July

Thesis

Energy Grid Data

I mean, data is in vogue now, and people are really kind of a bit obsessed with data and data companies.

November 2024

Industry Transformation

As we move to more renewable energy production and electrify our consumption, the problems we must solve to modernize the grid are becoming more complex. This need is further amplified by strong increases in demand from AI data centers. High-quality data is crucial infrastructure for understanding the electric grid and making the most impactful decisions when operating and investing in it. We believe there is substantial value in unlocking access to such data, from avoiding grid outages due to overload to increasing the ROI of maintenance and new investment decisions.

At the same time, there are substantial issues associated with accessing quality data on the U.S. power grid: 

Fragmentation
The grid is divided into regional entities, such as the Eastern, Western, and Texas Interconnections, managed by various utility companies, independent system operators (ISOs), and regional transmission organizations (RTOs). 

Lack of Standardization
This fragmentation leads to diverse data sources and inconsistent reporting practices, making it difficult to compile comprehensive, high-quality data.

Non-centralized energy sources
The rise of distributed energy resources (DERs) like solar panels and electric vehicles adds further complexity. Data on these resources is often fragmented and incomplete, complicating grid balancing and forecasting efforts.

Privacy and security
Concerns in these areas restrict access to detailed grid data, as releasing such information could expose vulnerabilities to potential threats.

While government agencies and NGOs have had several initiatives underway to address the challenges above (e.g., NREL, IEA), none has yet delivered easy, open data access at market scale.

Therefore, we see a unique opportunity in a dedicated effort to aggregate the various data sources and make them available in a standardized format via API and application. Such data can underpin a wide array of new applications and use cases that either require it outright (e.g., applying Reinforcement Learning-based optimization to the grid) or can be substantially improved by it. In short, we see an exciting opportunity for the company that can aggregate and maintain the highest-quality grid data to become the nexus of an emerging ecosystem.
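To make the standardization idea concrete, below is a minimal sketch of what a normalization layer over fragmented grid data could look like. The payload fields, region names, and values are illustrative assumptions, not any operator’s actual API.

```python
# A minimal sketch of the standardization opportunity: one normalized schema,
# regardless of which ISO/RTO the reading came from. All field names below
# are invented for illustration.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class GridReading:
    region: str          # e.g., "ERCOT", "PJM"
    node: str            # pricing node or substation identifier
    load_mw: float
    timestamp: datetime

def normalize_ercot(raw: dict) -> GridReading:
    """An ERCOT-style payload (hypothetical field names)."""
    return GridReading("ERCOT", raw["settlementPoint"], raw["mw"],
                       datetime.fromisoformat(raw["intervalEnding"]))

def normalize_pjm(raw: dict) -> GridReading:
    """A PJM-style payload with different field names and units (kW vs. MW)."""
    return GridReading("PJM", raw["pnode_name"], raw["load_kw"] / 1000.0,
                       datetime.fromisoformat(raw["datetime_utc"]))

# Consumers of the API see one schema no matter the upstream source:
readings = [
    normalize_ercot({"settlementPoint": "HB_NORTH", "mw": 412.5,
                     "intervalEnding": "2024-11-01T17:00:00"}),
    normalize_pjm({"pnode_name": "WESTERN HUB", "load_kw": 398200.0,
                   "datetime_utc": "2024-11-01T17:00:00"}),
]
```

The hard part of the opportunity is not this mapping itself but maintaining hundreds of such adapters at high data quality; that maintenance burden is exactly what could make the aggregator defensible.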

July

Thesis

Nature Intelligence

We have been inspired by the field of digital bioacoustics ever since being introduced to this field through Karen Bakker’s work.

November 2024

Infrastructure Tech

We have been inspired by the field of digital bioacoustics ever since being introduced to it through Karen Bakker’s work. We believe a few factors drive the emergence of this field. For one, sensors are becoming smaller and cheaper while edge processing and memory capabilities increase. The broadened availability of these sensors has led to an increase in domain-specific physical-world data – a recurring theme in categories we get excited about – that can be coupled with complementary data sources. Combined with algorithmic breakthroughs, this data can be used in a host of interesting cases (a minimal pipeline sketch follows the list below):

  • Biodiversity monitoring: We believe that biodiversity is a crucial cornerstone of a climate-resilient ecosystem and world. Tracking biodiversity in a cost-effective and accurate way has a clear ROI for a host of different stakeholders. Bioacoustics augmented with different data sources seems to be an attractive way to achieve this. We see an opportunity to create an objective standard around this kind of data that can be critical to unlocking the emerging commercial ecosystem.

  • Optionality in collecting novel Nature Data: As we collect more data about our ecosystems, we will see emergent use cases for this data. 

    • We see a world where enough data on ecosystems is collected so that we can predict the trajectory of an ecosystem and take the measures/actions to maintain it. Potentially, this could enable the fast regeneration or creation of novel and healthy ecosystems from scratch.

    • Building more sophisticated bioacoustic models can allow us to develop a more granular understanding of the natural world (e.g., tracking the healthiness of individual plants or animals vs. entire ecosystems), which will drive novel use cases in agriculture and beyond.

    • We have been excited about human-to-animal communication for a while and have been following the work of organizations like the Earth Species Project. While concrete use cases will likely only emerge as we develop these models and understand their capabilities and limitations, proven applications such as guiding bees and deterring elephants from entering farms already show promising signs of impact and ROI.

    • As followers of the Santa Fe Institute, we are convinced that interdisciplinarity in building complex systems is conducive to human advancement. Developing a deeper understanding of nature’s complex ecosystems to inspire our man-made systems in novel ways holds significant upside. This is the core thesis behind our investment in Sakana AI.

    • We see the potential for bioacoustic data to resonate with consumers. For example, users could listen to and interact with ecosystems (e.g., their local forests).
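As a hint of what the underlying tooling might look like, here is a minimal sketch of the standard bioacoustic pipeline the thesis rests on: field audio to mel spectrogram to per-species scores. The architecture, parameters, and file name are illustrative assumptions, not a description of any specific product.

```python
# A minimal sketch of a bioacoustic classification pipeline, assuming
# 32 kHz single-channel field recordings. Real systems use far larger
# models trained on labeled ecosystem audio.
import torch
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(sample_rate=32000, n_mels=64)

class SpeciesClassifier(torch.nn.Module):
    def __init__(self, num_species: int):
        super().__init__()
        self.conv = torch.nn.Sequential(
            torch.nn.Conv2d(1, 16, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1),  # pool over time and frequency
        )
        self.head = torch.nn.Linear(16, num_species)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        spec = mel(waveform).unsqueeze(1)   # (batch, 1, n_mels, time)
        feats = self.conv(spec).flatten(1)  # (batch, 16)
        return self.head(feats)             # per-species logits

# Usage with a hypothetical sensor recording:
waveform, sample_rate = torchaudio.load("forest_sensor_clip.wav")
logits = SpeciesClassifier(num_species=200)(waveform)
```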

We see an exciting opportunity in an orchestrated commercial effort to bring the research from recent years into the field and deepen our understanding of nature and the positive upside that comes with that.

July

Thesis

AI Movie Workflow Suite

AI video content creation will likely diverge into two paths.

November 2024

Industry Transformation

AI video content creation will likely diverge into two paths: high-quality productions that capture and create wider cultural moments, and lower-quality, personalized content. Consumers are expected to value both types, making tradeoffs between production quality and personalization based on their needs.

High-Quality AI-powered Content – We believe that world-class creative talent is attracted to tools and places that enable them to realize their creative ambitions. Given AI's economics and possibilities in the creative process, it will become an indispensable tool for the best creators. We appreciate that AI models today cannot, on a standalone basis, generate world-class content on par with Hollywood-grade productions. We believe the foreseeable future will require holistic tools that enable outstanding creative talent to tell great stories with captivating visuals. Therefore, we see a unique opportunity to marry the capabilities of the most advanced AI models (across relevant layers) with an interoperable software and workflow suite.

We believe there is substantial economic value and optionality associated with successfully building out such a suite:

  • An AI-powered suite can wedge its way into a software market that has seen little innovation. As talent makes the availability of such solutions a key criterion in deciding whom to work with (e.g., studios), most major studios will likely have no choice but to adopt the solutions demanded. If played correctly, such an AI-enabled suite can replace existing tools and, over time, set new standards.

  • We see opportunities to selectively go end-to-end and enable the build-out of a full-stack AI-enabled movie studio/production company. 

  • We see substantial opportunities to expand into other mediums (e.g., gaming).

Low-Quality AI-powered Content – On the other side of the spectrum is lower-quality, highly personalized, rapidly produced content that can be generated by small creators and, ultimately, by the user (either actively or passively based on preferences). This will not require dedicated workflows from large consumer aggregators (e.g., Netflix, Meta, YouTube) but will instead be captured by companies uniquely positioned to democratize easy access to video generation models, automated content aggregation, and distribution.

From a venture perspective, we are especially excited about the opportunity associated with the former, but we believe large companies will also be built in the latter, where emerging players can identify and engage high-value niches that fall outside the core focus of existing platforms (e.g., sports).

July

World View

Consumer AirSpace and Bubbles

There is a palpable sense that we are in for a major disruption of the way we currently spend our time and money.

October 2024

Fundamental Consumer

Working Thesis
There is a palpable sense that we are in for a major disruption of the way we currently spend our time and money. There are a few underlying trends (some of them might appear at odds with each other):

Consumers are increasingly living and consuming in two spaces that are drifting apart: 

Increasingly Homogeneous AirSpace
Globalisation and innovations in mass production and marketing gave rise to global consumer brands and the first wave of a globally flattened culture. The internet put this on steroids – the same memes, music, and clothes are available almost instantly everywhere. The experience economy, initially a backlash against this homogenisation, has been commoditised: Uber from the airport to your similarly designed Airbnb, whether in Miami, Mumbai or Marrakesh. Scale wins, and to achieve that scale you have to work with social media and search engine algorithms, which tend to surface the most mainstream goods and content (because they are the least risky and most profitable), thereby reinforcing that mainstream for consumers. The same is happening in film, where studios increasingly focus on mainstream features. We use the term AirSpace, coined by Kyle Chayka, for this phenomenon of increasing homogeneity.

We expect the emergence of generative AI to further reinforce the unification of mainstream content. By definition, these models probabilistically create the type of content their training data makes most likely. As the cost of creating generative content comes down, this will produce massive amounts of predictable content that fits squarely into AirSpace and lacks the unexpected.

Increasingly Heterogeneous Personalized Bubble
At the other end of the spectrum, there is a strong trend towards individualised content consumption. With the abundance of on-demand platforms (e.g. Spotify, Netflix), there is a shift towards consuming content on demand and in a highly personalised way. While there are benefits to this type of consumption, it also makes the content each of us consumes predictable, as our individual preferences are understood and reinforced by recommendation algorithms.

As a result, our shared cultural fabric, which is an important medium through which we connect with each other, is being eroded. For example, in its final season in the late 90s, Seinfeld was consistently the number one show on television, averaging 22 million viewers per episode, who watched the episode simultaneously and discussed it in the office the next day. In 2023, the most watched show was Suits, which premiered in 2011 and had its final season in 2019 - we saw it come up in zero conversations in 2023.

We expect this to increase as AI-generated content becomes increasingly viable. We see a not-too-distant future where content across all media and potentially all levels of quality is created for an audience of N of 1, highly tailored to each individual's preferences. 


What we believe to be true about human psychology and sociology
People like trends and the comfort they bring. So AirSpace is not bad and will continue to exist. However, there is likely to be little room for innovation; large aggregators exist (e.g. Meta, Google, Airbnb) and will continue to monetise this in the best possible way.

Humans like to consume the content they enjoy, and that reinforces their bubble. The more personal, the better. Hence, the Personalized Bubble is not bad. We expect this to get much weirder from here as application developers and consumers lean into AI-powered use cases. Character AI was chasing this opportunity, but a team of former Google researchers was unlikely to embrace the weirdness. 

People like to consume authentic, unique things. However, much online content lacks authenticity/quality/care and is predictable. Gen AI is the straw that breaks the camel's back as the cost of content creation trends towards zero (or the cost of computing). 

As a result, there has been a noticeable shift in how large parts of our digital lives are moving either to group chats (which can act as a curation layer for the noise) or back to IRL in the case of dating (e.g. running clubs in NY or supermarkets in Spain). We also see this shift playing out beyond content and relationships. We believe that people have an innate desire to consume goods that others have put care into and that are unique. As this type of content becomes less present/prominent online (e.g., due to Gen AI), we expect to see a big shift towards people consuming physical goods and experiences that have this artisanal aspect, are unique or ephemeral, such as pottery, handmade clothing, leather goods, live concerts, etc. This is great for brands like Hermes, which have kept craft at the heart of their leather business. It's also great for live performing artists (and their ecosystem), local artisans, etc. 

Humans crave shared cultural experiences. Unexpected and rooted in whatever shared cultural fabric is left, these experiences must shatter the confirmatory AirSpace and transcend our personalized Bubbles. Achieving this in a repeatable fashion requires a deep understanding of the Zeitgeist and the ability to turn this inside out in unexpected ways that deeply resonate with a society's (or sub-groups) shared cultural fabric. 

Opportunity Areas
Substantial innovation will occur in the context of AI-enabled personalized experiences. We are excited about this opportunity and are looking for companies exploring envelope-pushing form factors and ideas that are borderline fringe today.

As the AirSpace and the Bubbles continue drifting apart – becoming more homogeneous on the one hand and more heterogeneous on the other – there will be substantial value in creating these types of experiences in a repeatable fashion. Studios like MSCHF and A24 have done a great job of this.

July

Thesis

Intelligence-Enabled Marketplace

We see an exciting opportunity for AI-enabled marketplaces to emerge.

October 2024

Infrastructure Tech

Working Thesis

We see an exciting opportunity for AI-enabled marketplaces to emerge. While there are many opportunities for AI to enhance marketplaces (see the good NFX write-up here), we are especially interested in situations where AI-enabled processes are in a reinforcing interplay with data advantages, leading to a sustained higher value proposition (i.e., better matching) in the marketplace (see graph below).

As outlined above, there are two interconnected feedback loops at play: 

  1. Using LLMs and VLMs to collect the right proprietary data at scale (e.g., conducting interviews, ingesting large documents, understanding client requirements via document upload, etc.).

  2. Using fine-tuned LLMs/VLMs plus other ML models to better understand demand and supply, identify actions that reduce uncertainty around matching probability (e.g., follow-up questions), and carry out these actions in service of more cost-effective, higher-value matching (a minimal sketch of this loop follows below).
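Below is a minimal, self-contained sketch of the two reinforcing loops. The scoring logic is a toy stand-in for a fine-tuned model, and all names are illustrative assumptions, not references to any real product or API.

```python
# Loop 1: structure unstructured supply/demand data.
# Loop 2: score match probability; when uncertain, act to reduce
# uncertainty (e.g., ask a follow-up) instead of guessing.

UNCERTAIN = (0.4, 0.6)  # scores in this band trigger a follow-up action

def collect_profile(raw_text: str) -> set[str]:
    """Loop 1 stand-in: in practice an LLM would extract structured
    skills/requirements from CVs, documents, or interview transcripts."""
    return {token.lower().strip(".,:") for token in raw_text.split()}

def match_score(demand: set[str], supply: set[str]) -> float:
    """Loop 2 stand-in: a fine-tuned model trained on past matches
    would replace this naive overlap score."""
    return len(demand & supply) / max(len(demand), 1)

def next_action(demand: set[str], supply: set[str]) -> str:
    score = match_score(demand, supply)
    if UNCERTAIN[0] <= score <= UNCERTAIN[1]:
        missing = demand - supply
        return f"ask follow-up about: {sorted(missing)}"
    return "propose match" if score > UNCERTAIN[1] else "keep searching"

demand = collect_profile("Python backend engineer with Postgres experience")
supply = collect_profile("Backend engineer: Python, Django, Postgres")
print(next_action(demand, supply))
```

The key structural point is the middle branch: actions that reduce matching uncertainty generate exactly the proprietary data that improves the next round of matching, which is what makes the two loops reinforcing.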

We expect businesses creating sustained value in this space to meet the following criteria:

  1. LLMs, VLMs, and other models can perform tasks to an acceptable degree (i.e., they meet a bare minimum threshold) – both on the proprietary data collection and matching side.

  2. Large amounts of unstructured data and feedback loops are useful for fine-tuning models that directly unlock economic value.

  3. Nobody has collected data relevant for training/finetuning these models at scale as there has been no economic/technological incentive to do so.   

  4. There are ways, using the capabilities in (1), to create highly frictionless form factors that allow users to interact with these platforms seamlessly and in highly personalized ways while generating large amounts of data.

  5. Initial data and model advantages can be sustained and turned into lasting moats with little risk of second movers and other market participants (e.g., incumbents with large distribution) being able to catch up. 

We see opportunities in various areas, from HR to travel to healthcare provider (e.g., psychologist) matching – especially in scenarios where a lack of information leads to low matching rates. A few cases:

Case Study 1: Staffing

Staffing is historically incredibly time-consuming, requiring a deep understanding of a candidate’s capabilities and of the job requirements. This is very hard to scale, as quality assessment usually requires 1) reviewing materials, 2) conducting interviews to dig deeper and reviewing those, and 3) feedback cycles to understand what type of candidates the demand side actually wants (stated vs. revealed preferences). As a result, many staffing marketplaces either do a bad job of vetting or are very expensive, with matching rates reflecting this.

Let’s go through the criteria set out above to see whether a hiring marketplace is a good fit to become intelligent:

  1. LLMs can already review and synthesize vast amounts of unstructured data (e.g., CVs, websites). They are capable of doing the same with job requirements. They are also capable of performing job interviews to a somewhat satisfactory level. 

  2. Models and AI interviews can be fine-tuned based on desirable outcomes (e.g., matching of demand and supply), thereby adjusting their reviewing and interviewing capabilities. This can even happen in a customized way, given that certain parties on the demand side are large enough to guarantee a certain “offtake.” Mercor has written about this on their blog.

  3. This part is less clear in the context of staffing. For one, there is a plethora of existing and new AI-enabled hiring tools that use AI-supported video (e.g., HireVue), and existing staffing platforms (e.g., Upwork) are rolling out video interviews, too. It is unclear to what extent these platforms have large amounts of unstructured data combined with hiring matches that they can use to train better models. Also, by sheer scale and distribution, these platforms should be able to generate plenty of data easily.

  4. In the segments of the economy where jobs are sought after, people are eager to be in the talent pool considered for specific jobs. In these cases, people are willing to share their data (CVs, etc.) and conduct AI interviews – especially if the process is smooth. Given that the demand side (i.e., the companies looking to hire from the talent pool) is reasonably attractive, the CAC associated with acquiring the supply and data (video interviews, CVs, etc.) should be fairly low.

    As described above, while we don’t assume AI-based matchmaking is perfect yet, we believe AI can already support increasingly efficient matching, enabling the development of a cash-flow-generating business model while data is collected and models improve.

  5. Given the dynamics described under (3), it is unclear whether an HR marketplace with an initial data advantage can sustain it. What if existing platforms like Upwork roll out AI-based video interviews and start training their models? With their existing brand and supply, they should be able to generate substantially more data than any startup, faster, leading to better models, and so on. If not, what quantity of data is enough to establish a platform as the winner? Will general LLMs acquire the capabilities of fine-tuned models as they improve and context windows grow?

July

Thesis

AI-enabled PCB Automation

It is a recurring meta-theme that we think AI will have a great impact on the physical world.

September 2024

Industry Transformation

Working Thesis

It is a recurring meta-theme that we think AI will have a great impact on the physical world. At the same time, we are convinced that companies that innovate around business models and take ownership of certain processes will unlock a lot of value, maximizing the value capture associated with their technology. 

One area that has caught our attention in this context is AI-enabled PCB layouting. Printed Circuit Boards (PCBs) are the backbone of modern electronics, enabling a wide range of devices across various industries. In consumer electronics, PCBs power smartphones and smart home devices, enhancing our daily lives. The medical field relies on PCBs for critical equipment like MRI scanners and pacemakers, improving patient care. Automotive applications include engine control units and advanced driver assistance systems, making vehicles safer and more efficient. In aerospace and defense, PCBs are crucial for avionics and satellite communication. Industrial settings benefit from PCBs in robotics and automation systems, while telecommunications infrastructure depends on them for routers and cell towers. From the devices in our pockets to the satellites orbiting Earth, PCBs play an indispensable role in connecting and powering our technological world. As the complexity of end devices increases, so does the complexity of PCBs. 

The increasing complexity of PCB layouts makes design more challenging due to higher component density and miniaturization, which require intricate placement strategies and precision routing. Managing multiple layers and implementing high-speed interfaces demand careful signal integrity analysis and tighter manufacturing tolerances. Integrating mixed technologies complicates the design process, requiring effective partitioning and thermal management. These factors necessitate advanced skills and sophisticated tools to ensure that designs meet performance and manufacturability requirements. That said, as shown in the table below (Source: Claude), the processes associated with correctly laying out a PCB already take around 50% or more of the total time of PCB development today. We expect this share to increase as PCB complexity grows to keep pace with the novel applications we need them for.

It is our current assumption that increasing complexity will have a disproportionate impact on the effort and time it takes to create these layouts. Unlike schematic design, layouting seems to be a fairly constrained task requiring little strategic context: a PCB layout either works or it does not against well-defined benchmarks, whereas schematics can be more ambiguous. We have seen significant progress in AI model development (especially reinforcement learning) that can automate and significantly accelerate parts of the PCB layout process.
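This benchmark property is what makes layout amenable to reinforcement learning: the objective can be scored automatically. Below is a minimal sketch of such a reward function; the metrics and weights are illustrative assumptions, not a known production system.

```python
# A sketch of why PCB layout suits RL: hard penalties for correctness
# (design-rule violations, unrouted nets), soft penalties for quality
# (trace length, via count). An agent placing components and routing
# traces can optimize this score directly.
from dataclasses import dataclass

@dataclass
class LayoutMetrics:
    drc_violations: int       # design-rule checks failed (must be zero)
    unrouted_nets: int        # connections the router could not complete
    total_trace_length_mm: float
    via_count: int

def layout_reward(m: LayoutMetrics) -> float:
    if m.drc_violations > 0 or m.unrouted_nets > 0:
        # Invalid layouts: large negative reward scaled by severity.
        return -100.0 * (m.drc_violations + m.unrouted_nets)
    # Valid layouts: shorter traces and fewer vias score higher.
    return 1000.0 - 0.1 * m.total_trace_length_mm - 1.0 * m.via_count

# Example: two valid candidate layouts for the same schematic.
a = LayoutMetrics(0, 0, total_trace_length_mm=840.0, via_count=32)
b = LayoutMetrics(0, 0, total_trace_length_mm=910.0, via_count=18)
print(layout_reward(a), layout_reward(b))  # agent prefers the higher score
```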

The total number of PCB designers in the United States is 72,971, with an average salary of around USD 74k per year. This puts total PCB designer salaries at roughly USD 5.4B. Automating a significant part (70%+) of their jobs offers considerable cost savings. This does not even include the economic benefits of AI models accelerating the process and substantially reducing development timelines. This is especially valuable at the higher end (e.g., aerospace, defense), where PCBs are highly complex and take orders of magnitude more time to design. Accelerating items on the critical path into production is likely extremely valuable and hard to capture in cost-savings numbers alone.

We have spent significant time thinking about the opportunities of AI-enabled outsourcing and services businesses and believe that PCB layouting provides the structural environment for such a model to emerge:

  1. Objective benchmark assessments 

  2. Clear benefits to assuming responsibility for working output

We believe the player capable of driving significant improvements here can build a large business, with a wedge into a market that is otherwise hard for software companies to penetrate due to the dominance of Altium and others.

July

World View

European Defense

A new era of strategic autonomy and societal resilience

August 2024

Industry Transformation

With war on its doorstep and American attention shifting to the Pacific, Europe has been forced to prioritize resilience and strategic autonomy for the first time in a century. While Russia’s invasion of Ukraine exposed Europe’s lack of preparedness and home-grown capabilities, the conflict has shifted the perception of European builders, investors, and policymakers on the importance (and ethics) of developing and deploying critical technology to foster sovereignty.

The result has been a groundswell of momentum aimed at transforming Europe’s defense-industrial base; protecting European values by deterring Russian aggression in the near term and building the long-term foundations to project strength amid rising Great Power Conflict. 

In recent years, change has occurred at all levels – from the EIB’s updated views on defense technology and the European Commission’s first-ever Defence Industrial Strategy to the rise of an actual defense technology ecosystem in Europe for the first time, catalyzed by the momentum of lighthouse companies like Helsing, ambitious efforts like the €1b NATO Innovation Fund, and grassroots organizations like the European Defense Investor Network.

But expanding markets, increased capital flows, and narrative momentum don’t always imply attractive forward-looking returns. 

Despite its growth, the European defense market remains ruled by inertia, fragmentation, and protectionism. While European defense spending has returned to Cold War levels, the continent still lacks urgency relative to geopolitical allies and rivals. The conflict in Ukraine has done little to unite European perspectives on the what, how, and who of building next-generation defense capabilities. The EU’s two largest economic and military powers – Germany and France – remain fundamentally split on the role of Europe in its own defense. This philosophical gap threatens to worsen the severe physical fragmentation of European defense forces – Europe operates 5x as many vehicle platforms as the US. At the same time, the UK has increasingly shifted attention away from the continent towards the AUKUS coalition.

The US defense technology ecosystem, far more developed than Europe’s, inspires little confidence in what lies ahead. Through September of 2023, venture-backed companies were awarded less than 1% of the $411 billion in Defense Department contracts awarded in the government’s fiscal year – only a slightly larger share than in 2010, when few startups were building military technology. And while companies like Anduril have shown that the path to scale is possible, the company’s success may end up making it the new technology distribution chokepoint instead of a bellwether for a thriving new defense ecosystem.

These factors present significant obstacles to building and scaling European defense technology companies. They may also present unique opportunities for a highly targeted investment approach in the space, aimed at turning the market’s weaknesses (e.g. fragmentation) into strengths and riding key catalysts that may help emerging companies overcome the inertia and sub-optimal industry structure.

Catalysts Creating Opportunity

To believe in the opportunity to invest in emerging European defense technology companies in the face of the incumbent market structure, we need to see significant technological, economic, social, and policy-related shifts that are, critically, available to emerging entrants and not incumbents.

Europe has been forced to prioritize resilience and strategic autonomy for the first time in a century. This has restructured the continent's procurement regimes, capital markets, and attitudes. The simple availability of early-stage capital for emerging defense and security companies in Europe cannot be overstated. With dozens of early-stage funds now focused exclusively or significantly on the space and later-stage investors slowly showing up, financing a defense technology company in Europe to the point of scale is now possible. As the EIB and other capital markets institutions continue to evolve their view, we expect many of the capital markets barriers to financing defense and security companies across the company life cycle will begin to fall away.  

Procurement remains a significant challenge, but the fragmentation of Europe creates opportunities for emerging companies to get to market faster by targeting smaller, potentially more agile countries more inclined to adopt new solutions. Greece, for example, now spends 4% of GDP to support a tech-forward defense strategy while countries close to the front in Ukraine have been forced to move quickly to adopt new solutions. 

The “primitives” for rapid, capital-efficient defense technology development have come online, making it possible for companies to ride multiple technological tailwinds to build solutions that meet the complex needs of government customers. Decreasing costs of hardware, enabled by advanced manufacturing, better (software) tooling, and the acceleration of physical-world foundation models make it possible for companies to develop complex defense technology systems at a fraction of the cost of incumbents. AI systems are already operating successfully in significant tests (like dogfighting with fighter pilots) and on the battlefield in Ukraine, which should drive more receptiveness from (risk-averse) buyers and users. 

Lighthouse companies and talent ecosystems are emerging around defense and national security-focused technology for the first time in Europe. The US defense technology ecosystem built momentum on the back of breakthrough companies like SpaceX and Palantir. The same pattern is playing out in Europe, with companies like Helsing and The Exploration Company forming the foundation for a thriving defense-industrial ecosystem in Munich. While less developed in terms of defense and space-focused technology, places like Stockholm (energy) and Paris (AI) have become beacons for talent in areas adjacent to national security. Meanwhile, London has captured much of the early-stage energy and likely represents a strong ecosystem to launch a defense technology company thanks to its physical proximity to Europe and cultural proximity to the US.

The Ukraine conflict has presented a unique opportunity for companies to develop proof points and revenue, creating a “backdoor” for future contracts with Western European governments. It has also highlighted the future of warfare. Rapid acquisition and deployment processes in Ukraine have helped companies generate real revenue and test systems in live situations. While larger Western European governments have been slower to respond, and more likely to simply drive business to existing primes, the proof points being developed by emerging companies should help their cases in (eventually) accessing larger, longer-term programs. Technologically, the predominance of electronic warfare has given a fundamental advantage to agile companies that can iterate rapidly to stay ahead of Russian competition. 

Key Insights

The following factors are the most significant in driving success for emerging European defense technology companies. These lessons are drawn from the company and our customer/expert interviews.

New defense companies are software-first, R&D-centric, and mission-driven. Incumbent defense contractors operate on a cost-plus business model, essentially building to the specifications laid out by government buyers and layering a “reasonable” margin on top (5–15%). As a result, large primes spend less than 3% of their budget on R&D and lack the incentive to innovate. Companies like Anduril and Shield AI, on the other hand, take on product development risk themselves and spend massively on R&D.

And while the hardware these companies build tends to garner the most attention, the software and autonomy systems underlying the hardware make everything work. Anduril’s Lattice platform ties together all of the company’s hardware products, fusing sensor data and creating an autonomous operating picture. This software-defined operating model drives better margin structures (Anduril targets 40% gross margin vs. Lockheed and other primes under 20%), allowing them to continue fueling an R&D flywheel.

Fragmentation remains the most challenging aspect of European defense. It may also present the largest opportunity. Europe’s fragmentation challenge needs little additional explanation: there is not one unified military-industrial complex on the continent; there are 27. Each has a different view on supporting its own national champions, different relationships with EU member countries, and divergent views on buying from outside (usually the US). This has resulted in a complex web of disparate capabilities (weapons systems, vehicle platforms, and communication models) that limit rapid response and collaboration.

Understanding this, and realizing that it is likely beyond the reach of one company to solve from a hardware (let alone cultural) perspective, is key to uncovering where opportunities sit. Helsing, for example, has leveraged its positioning as a multi-domain AI backbone to build early leadership around this concept. As cheap drones, cameras, and other sensors proliferate, the opportunities to coordinate the complex data and operational picture – solving capability and collaboration gaps through more modularity and interoperability – become larger.

Technology differentiation is table stakes. The most successful companies will possess a “secret” to navigating procurement. Despite the shroud of complexity surrounding defense procurement, success remains largely “champion-driven”, as Anduril CRO Matt Steckman recently remarked. Companies don’t win through better technology, they win by solving specific problems for people with influence in the buying process. Companies must simultaneously engage in long-term relationship building (including lobbying) to build trust with procurement influencers while developing relevant proof points in the field. One way of doing this, as Anduril demonstrated and emerging European players like Lambda Automata are attempting to replicate, is by viewing defense and security as a “conglomeration of micro markets” – which includes adjacent opportunity areas like public safety and border control.

Narrative momentum is highly rated but likely remains underrated by European founders. The traditional stereotypes of European vs. American founders seem to have played out in the early part of this new defense tech wave – from Anduril’s narrative mastery to the momentum of ecosystems like El Segundo to the sophisticated way some companies and investors have integrated themselves into the Washington decision-making systems. As in all markets, there is a reflexive nature to success in defense – the companies that win figure out how to craft a better story and more social proof to attract capital and talent in advance of fundamental traction. 

Distribution bottlenecks inherent in government and defense contracting are already contributing to market consolidation for emerging defense technology companies. Competing against defense primes means eventually competing in every domain they operate in. As software-first companies break through, the returns to scale and breadth might become even greater – platforms like Anduril’s Lattice get stronger as they consume more data and control more hardware assets in the field. Combined with the defense market’s natural bent towards consolidation, companies that can be “first to distribution” in a given area will be very hard to displace and will be strongly positioned to roll up interesting technology and talent, as Anduril has already started to do aggressively. (The sheer number of Anduril references in this document reflects its outsize and rapidly compounding success in this space!)

Emerging Investment Areas

There are several valuable defense market maps and landscapes worth evaluating to understand different ways of breaking up the market; perhaps the most comprehensive is this one from Quiet Capital’s Michael Bloch: National Security and Defense Market.

To avoid rehashing those efforts, our focus has been on identifying emerging themes that span multiple segments of such maps, supported by converging market, technology, and geopolitical tailwinds. While not comprehensive, these themes align well with the catalysts and insights above and are where we have seen several of the most interesting companies in our review – the most interesting companies tend to touch multiple themes. 

Modularity and Interoperability → Leaning into the fragmented nature of European defense through solutions that aim to unite disparate operating systems and coordinate complex environments. While software capabilities will be the core connective tissue, hardware plays a big role as well. Cheaper, smaller, interoperable systems built to be easily adopted (both budget and technology-wise) can help accelerate initial deployment and provide companies with a platform from which to expand. 

Rapid Response → Building a more dynamic defense-industrial base by shortening time and cost to intervention across domains and operating areas. This ranges from faster kinetic capabilities (e.g. hypersonics and electronic warfare) to rapid manufacturing capabilities (e.g. Replicator) to faster deployment of machines and people (e.g. counter UAS swarms, labor coordination platforms) to systems that can be deployed (and as importantly, replaced) quickly. 

Multimodal Physical World Data and Intelligence → Wayve’s recent autonomous driving demonstrations showcased the speed at which multi-modal models are making their way into the physical domain. Along with the rapid decline of hardware costs, models that can reason more dynamically create interesting opportunities in defense, where operating environments are extremely fluid (i.e. not repetitive like pick and place, etc.) and thus pose problems for more rigid AI systems. Better simulation data will also continue to play an important role in preparing autonomous systems for live action. This represents a more horizontal theme and is thus something we might pursue a deeper dive into beyond defense.

Software for Hardware → The declining cost of hardware also creates room for better tooling, both at a collaboration/workflow level (i.e. Atlassian and GitHub for hardware builders) and at a design level (i.e. better CAD/EDA, “Figma for chips”, etc.). Fusion, a software platform developed and externalized by space launch company Stoke, highlights the need for better tooling to serve the hardware revolution. Enhanced IP and data security, along with the high levels of precision required for certain use cases, may create specific opportunities in defense.

Maritime Production, Security, and Infrastructure → Control over maritime infrastructure represents a significant geopolitical and economic advantage. Over the past decade, China has invested heavily in shipbuilding capacity. Today, a single shipyard in China has more production capacity than the entire US shipbuilding industry. However, the importance of maritime control goes beyond just shipbuilding. Undersea cables, for example, are the backbone of the global financial and communications systems – over 95% of the world's communications are carried by a network of roughly 500 cables laid across the oceans. These represent critical vulnerabilities that need to be proactively protected through better surveillance, kinetic deterrence, and cybersecurity technologies.

Combatting Digital Authoritarianism → Control of the digital economy is highly centralized, with cheaper data processing and engagement-centric business models (i.e. advertising) feeding the strength of a small number of powerful companies and institutions. This has led to democratic deterioration and a loss of trust in key institutions. It also creates a more straightforward surface area for attack and manipulation by adversaries – spanning consumer-focused influence campaigns to corporate IP theft. Technology that empowers sovereignty over assets and information, increases privacy, and enhances secure communication and collaboration represents a somewhat orthogonal, bottom-up approach to investing in defense and security, as the go-to-market model may be less dependent on large-scale government procurement.

July

Thesis

The Manufacturing Action Layer

October 2024

Industry Transformation

The anticipated (and much-needed) manufacturing renaissance in the US and Europe – sparked by rising competition with China and a movement to invest in expanding domestic production capacity in the wake of pandemic-induced supply chain breakdowns – is hampered by several deficiencies. Among these limiting factors is the challenge of turning vast amounts of disparate industrial data into actionable insights and execution that drive true productivity gains.

As the cost of adding sensors to anything and everything in the manufacturing process has decreased significantly, the amount of data produced in the factory environment has exploded. However, conversations with people across the manufacturing landscape make it clear that the impact of production digitization continues to underperform expectations. 

More than a decade into the Industrial IoT wave, most data from the manufacturing process ends up – at best – brought together in static Excel files. And while platforms like Palantir’s AIP promise rapid transformation, the ground reality is that data from different systems continues to live only in the heads of individual operators – a critical risk in an industry with massive turnover and an aging workforce. The VP of Industry 4.0 at a ~$5B market-cap automotive supplier recently remarked that they still lack the visibility to know whether a machine is even running in a factory without calling someone on the ground.

Incumbent software offerings in manufacturing are often stitched together over years (even decades) of acquisitions and integrations, resulting in a mess of fragmentation, technical debt, information silos, and process bottlenecks.

Given this backdrop – and the macro tailwinds that will continue to drive demand for next-gen manufacturing solutions – our initial hypothesis is that there are two interesting angles of attack for new companies: 

  1. Modern industrial control and execution systems capable of aggregating data across modalities and business systems, automating mission-critical operation and production activities, and assuming responsibility (via outcome-based pricing models) for driving efficiencies (a minimal data-normalization sketch follows this list).

  2. Software-defined manufacturers aiming to displace incumbent manufacturers entirely through more efficient end-to-end approaches in specific verticals/use cases. 
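To illustrate the first angle, here is a minimal sketch of the foundational step: normalizing machine state from heterogeneous plant systems into one schema, so that questions like "is this machine running?" can be answered without a phone call. The source formats and field names are invented examples of typical fragmentation, not any vendor’s actual interfaces.

```python
# Two fragmented plant systems, one normalized answer.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MachineState:
    machine_id: str
    running: bool
    observed_at: datetime
    source: str  # which plant system reported the state

def from_mes(row: dict) -> MachineState:
    """A hypothetical MES export encodes state as a status string."""
    return MachineState(row["asset"], row["status"] == "RUNNING",
                        datetime.fromisoformat(row["ts"]), "mes")

def from_plc(tag: dict) -> MachineState:
    """A hypothetical PLC historian exposes a numeric spindle-speed tag."""
    return MachineState(tag["machine"], tag["spindle_rpm"] > 0,
                        datetime.fromtimestamp(tag["unix_ts"], tz=timezone.utc),
                        "plc")

states = [
    from_mes({"asset": "CNC-07", "status": "RUNNING",
              "ts": "2024-10-01T08:00:00+00:00"}),
    from_plc({"machine": "CNC-12", "spindle_rpm": 0, "unix_ts": 1727769600}),
]
for s in states:
    print(f"{s.machine_id}: {'running' if s.running else 'stopped'} ({s.source})")
```

Normalization alone is table stakes; the thesis is that the action layer on top (automation and outcome-based responsibility) is where durable value accrues.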

Both models face challenges. The “base rates” for selling impactful digital solutions to manufacturers are mediocre at best, and the companies that have reached scale – platforms like Cognite, Palantir, and Samsara – have significant distribution advantages that must be overcome by narrower emerging entrants. For the “full stack” players, the scale potential is clear, but it remains to be seen whether venture capital is the right financing tool (“CNC machines with startup branding” is how one person described one of the companies to us).

July