Like the rest of tech, we’ve been knee-deep in understanding how AI is affecting our infrastructure investments at Headline. The landscape of AI investments is evolving rapidly, with two distinct tracks gaining prominence.
In 2024, our infrastructure software (“infra”) team held around 1,200 founder meetings. Of these, we estimate around 25% to 30% were with founders building AI-centric products. In the same period, we tracked 92 announced Series A deals across the infra landscape, which we’ve charted over time below. We estimate ~40% of these deals were for AI-centric companies, a percentage that quickly trended upward as the year progressed.
Unsurprisingly, we observe that AI-centric deals are increasingly permeating the infra VC market, accounting for 61% of announced deals in Q4 '24. What’s perhaps less apparent from internet forums and media, though, is that a significant portion of infra Series A rounds are still going to companies that are not ostensibly AI-centric. This suggests that there continues to be meaningful innovation outside of the direct AI universe.
Here’s our view of the emerging landscape of AI-centric infra companies specifically, along with our key considerations when evaluating opportunities in the space.
Our View of the Landscape
We frame our view of the AI-centric infra universe by considering two major categories of companies, per the graphic shown below:
- “Infra of AI” – Companies building the new infrastructure stack for AI (i.e. hardware, compute, models, tooling, and apps)
- “AI in Infra” – Companies using AI to transform the functional areas we cover within infra (i.e. dev tools, DevOps & SRE, data & analytics, IT ops, cybersecurity, and QA & testing)
Of the AI-native companies that raised Series A rounds in 2024, about 55% were characterized as “infra of AI” companies and around 45% as “AI in infra” companies.
Why this distinction? Companies in each category are working toward fundamentally different goals. “Infra of AI” companies are seeking to become the true pick-and-shovel plays of AI, while “AI in infra” companies are seeking to uplevel current organizational practices.
“Infra of AI” Market
There is a new, continuously-evolving stack of infrastructure solutions for generative AI. While there is overlap with the infra stack for predictive ML, there are also novel solutions purpose-built for AI as a result of new, distinct requirements. For example, in predictive ML, enterprise teams were built around the need to create specialized models for specific use cases. In contrast, general-purpose models are sufficient for many use cases in generative AI, shifting the focus away from model development and further up the stack toward building useful, reliable applications on top of models.
Our key questions in the emerging AI infra stack center on 1) where value ultimately accrues and 2) which solutions are enduring vs. stopgaps.
The ~current “infra of AI” stack:
As a generalization, we have observed the most funding and traction to date in the hardware, compute, model, and application layers. The tooling layer remains comparatively nascent – this may be because we are still in early innings for production enterprise AI deployments. Interestingly, the tooling layer contains categories of companies – data, operations, and security – that have produced most of today's publicly traded infrastructure success stories. Our investment in Gentrace highlights our belief that there will ultimately be success stories in these sectors in the AI era as well despite the nascency of the market today.
In deciding where to prioritize within the infra stack (for now), we are keeping these key considerations in mind:
Durability
The AI infra stack is ever-evolving and fickle. As investors who emphasize product-market fit, we typically seek clear evidence that there is user love for a product. In the AI era, however, this alone is not enough – a product or technique that is relevant today may be less relevant tomorrow as innovation introduces other ways to address user needs. For example, we already hear sentiment that finetuning may be relevant for a smaller universe of use cases today vs. what was initially anticipated. We are encouraged by founders who are responsive to changing market conditions, building roadmaps with an eye toward long-lasting value creation.
Competition
Competition in the AI infra stack is multi-faceted. For any given solution, there are typically 1) emerging direct competitors, 2) existing incumbents in the problem space, and 3) larger, well-funded AI platforms that could develop competing solutions. For example, it remains unknown whether the problem of agent auth is ultimately solved by a new startup, an incumbent like Okta, or a platform like Anthropic. “Who is best positioned to solve this?” becomes the key question. For areas where there are parallel existing incumbents, such as ops, data, and security, are the requirements of AI distinct enough to require a new, AI-native solution? Think of using Postgres with pgvector instead of a standalone vector DB, for example (sketched below). For any area, can a killer feature from OpenAI, Anthropic, Perplexity, etc. disrupt the entire landscape?
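To make the Postgres example concrete, here is a minimal sketch of vector similarity search running on “incumbent” infrastructure – Postgres with the pgvector extension, queried from Python via psycopg. The connection string, table name, and toy 3-dimensional embeddings are hypothetical illustrations, not a claim about how any particular company implements this.

```python
# Minimal sketch: semantic similarity search on Postgres + pgvector
# (an "incumbent" stack) rather than a dedicated vector database.
# The database name and 3-dimensional embeddings are illustrative only.
import numpy as np
import psycopg
from pgvector.psycopg import register_vector

conn = psycopg.connect("dbname=example", autocommit=True)  # hypothetical local DB
conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
register_vector(conn)  # lets psycopg pass numpy arrays as the vector type

conn.execute(
    "CREATE TABLE IF NOT EXISTS documents "
    "(id bigserial PRIMARY KEY, content text, embedding vector(3))"
)
conn.execute(
    "INSERT INTO documents (content, embedding) VALUES (%s, %s)",
    ("hello world", np.array([0.1, 0.2, 0.3])),
)

# Nearest-neighbor search using the L2 distance operator (<->); cosine is <=>.
query_embedding = np.array([0.1, 0.2, 0.25])
rows = conn.execute(
    "SELECT content FROM documents ORDER BY embedding <-> %s LIMIT 5",
    (query_embedding,),
).fetchall()
print(rows)
```

The point for the competition question is simply that a widely deployed incumbent can often absorb a seemingly AI-native requirement via an extension, which raises the bar for standalone, purpose-built solutions.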
Traction
For us as investors, product-market fit is a guiding philosophy. Many infrastructure products initially follow a bottom-up, developer-led motion – this is especially prevalent in the AI infra market today, where enterprise production use cases are still early. However, not every product that has “problem-market fit” with developers and hobbyists will be able to successfully monetize, and even the products that can monetize prosumers and early-stage startups may struggle to capture the enterprise market. Product-market fit among developers ≠ product-market fit among enterprises. Given that enterprise is a major driver for most scaled infrastructure businesses, we seek proof points that a product has initial adoption, usage, and willingness to pay among early enterprise customers who share similar problem statements.
“AI in Infra” Market
While “infra of AI” may have dominated headlines and funding in 2023 and 2024, we are seeing an increasing emphasis on “AI in infra” companies in 2025. With breakout companies such as Cursor, it is undeniable that value is accruing to the app layer, and there is a continued push for purpose-built solutions tailored to different industry verticals and company functional areas.
On the infra team, we have historically focused on products for software engineering, DevOps & SRE, data & analytics, IT, cybersecurity, and QA teams. Today, each of these sectors has an initial cohort of companies building solutions – often agentic in nature – to make operations within these functional areas far more efficient. This ranges from helping with code generation in software engineering to facilitating alert investigation in cybersecurity – all tasks that were previously labor-intensive for humans.
The ~current view of the “AI in infra” landscape:
While we believe there is significant ROI potential in each of these areas, they’re all at various stages of AI adoption and maturity today. Among these, we see that the dev tools category has by far the greatest level of adoption, with code generation as the dominant use case. At Headline, we have been bullish on this use case for AI since our initial investment in Tabnine in 2019.
Aside from dev tools, we believe each of the other “AI in infra” categories is still nascent. Each category faces its own barriers to achieving widespread adoption. For example, the mission-critical nature of cybersecurity raises the bar on what’s needed before organizations can meaningfully leverage AI, while the importance of accuracy and reliability for user experience presents a challenge for data & analytics.
To invest in these categories in 2025, we’re focused on the following considerations:
Adoption & ROI
In validating product-market fit, we look for initial signs of adoption and value delivery. Are startups’ preliminary use cases resonating with customers and driving real value? We evaluate this both qualitatively through customer feedback and quantitatively through customer-level usage data (e.g. the number of queries successfully answered for customers over time). While we acknowledge these products will deepen and evolve with time, we are eager to see that the current iteration is working and resonating with early customers.
Roadmap & Market Sizing
As startups often do, many of these companies initially solve narrow, limited problems while proving product-market fit. However, we look to invest in teams with big visions to expand scope and unlock venture-sized outcomes over time. To do this, they may need to solve substantially more complex use cases, deliver meaningful ROI, and serve a wide enough universe of enterprise customers. For example, many products across these spaces begin with rather simple, low-risk use cases such as enrichment and triaging before targeting more complex use cases over time after building trust and collecting useful data. Others may resonate specifically with specialized teams (e.g. platform engineering, product security) or target customers with specific tech stacks (e.g. companies who have already defined a semantic layer). In these situations, we often look to build confidence in a team’s ability to expand its market over time.
Competition
In the “AI in infra” universe, incumbents are often just as critical to evaluate as direct competitors. The key question becomes which is better positioned to solve a given problem: an AI-native solution that can iterate rapidly with AI in mind, or a scaled incumbent that is already deeply entrenched in customers’ workflows? The answer ultimately varies by functional area and depends on 1) the strength of incumbents and 2) the level of disruption imposed by AI. With several companies building in parallel across these functional areas at similar stages, we must also be thoughtful in understanding a company’s right to win. Does the team have deep domain expertise, incomparable speed, and an intimate understanding of customer needs? Are there early network effects that quickly embed a product deep within customers’ organizations?
Where We Plan to Spend Time
Based on the considerations above, we have identified a few areas that are particularly exciting for us as we approach AI infra investing in its current state.
For “infra of AI,” we are prioritizing areas where we foresee durability, depth of product and value-add, and early signals of product-market fit across developers and enterprises. We strive to find a clear need for purpose-built, AI-native, standalone solutions for a specific point in the value chain. With this in mind, we are interested in companies solving key problems for agents, as we believe this is where the greatest potential intersects with today’s most significant unsolved infrastructure problems. We are currently particularly interested in the following areas of agent infrastructure: authentication; integration with external apps and the MCP ecosystem; runtimes and sandboxed environments; and orchestration.
For “AI in infra,” we are prioritizing areas where AI-native players have a clear right to win versus incumbents, early evidence of value delivery and ROI across enterprise customers, and initial signs of product-market fit. It’s still early days across the functional areas we cover in infra, as accuracy, reliability, and trust are still maturing. However, we see especially interesting dynamics at play within DevOps & SRE and QA & testing. Both spaces lack centralized “system of action” platforms today and require substantial human capital without clear top-line value-add – the QA & testing market is still dominated by services spend, and the DevOps & SRE stack is notoriously disjointed.
Having backed companies such as Mistral, NGINX, Bitwarden, and DBeaver, we are also longstanding proponents of open source software and are thus always intrigued by companies with open source heritage in both “infra of AI” and “AI in infra.”
Though this is our current thought framework, we also remain momentum-driven in this market – we can identify product-market fit more reliably than we can predict where markets are evolving. We will thus continue to learn and evaluate opportunities across the landscape. If you are building in any of these areas, please get in touch by reaching out to jacob@headline.com and ram@headline.com.