I build systems at the moment before the market understands they are needed.
That has been the through-line of a fifteen-year career spanning platform infrastructure, autonomous systems, distributed optimization, IoT edge compute, and now the foundational architecture of AI itself.
I started at Apple in product, working on the platform infrastructure decisions that defined how software reaches users at scale — the Mac App Store and the freemium model later used for iCloud. From there I joined Amazon as founding hire #3 on the team that built Amazon Robotics and SCOT. Amazon Robotics was physical AI before the term existed: autonomous mobile robots navigating dynamic warehouse environments, real-time path planning under human–robot interaction constraints, physical intelligence operating at industrial scale. SCOT was a large-scale distributed AI and operations research engine — stochastic demand modeling, multi-echelon inventory optimization, real-time fulfillment orchestration across a global network — responsible for $30B+ in cumulative operational savings that continue to compound today.
As a founder, I have consistently worked at the systems layer. Fridgio was an IoT edge compute platform: distributed sensor networks on physical assets, real-time telemetry pipelines, edge state management under connectivity constraints. GoCrazy was a distributed systems problem in a constrained infrastructure environment — offline-first payment rails, a mobile-first architecture, resilience to network partitions, all serving a largely unbanked population. As Founder and GP of Driving Forces, I invested at the infrastructure and physical AI layer before the market followed: six unicorns, including SpaceX, Figure AI, and xAI, at 30%+ Net IRR.
I am now building at the deepest technical layer of my career. One company addresses the global inference bottleneck — disaggregated inference across heterogeneous accelerator clusters, KV cache management under memory bandwidth constraints, speculative decoding tradeoffs, and the scheduling primitives required to deliver reliable, low-latency AI inference at planetary scale. The other is a foundational AI systems research laboratory working at the intersection of computational physics and machine intelligence — studying how intelligence emerges when computation becomes physically heterogeneous.
The through-line across all of it: I work at the layer where physics, systems, and intelligence meet.
“Sidney has a rare combination of deep technical credibility and genuine investment instinct. He sees the infrastructure layer before most people know it exists.”
Building a foundational AI systems research laboratory studying a central scientific question: how does intelligence emerge when computation becomes physically heterogeneous? For most of modern computing history, computation could be treated as relatively uniform — software, compilers, distributed systems, and AI infrastructure were built on the assumption that execution environments were predictable enough to abstract away most hardware differences. That assumption is now breaking. The modern compute environment is fragmenting across GPUs, TPUs, domain-specific ASICs, photonic processors, neuromorphic systems, analog compute, robotics processors, and sovereign silicon initiatives. In this environment, computation itself becomes an object of study — not models alone, not chips alone, but computation: how it behaves across heterogeneous hardware, how it should be structured across asymmetric environments, how it should adapt under changing conditions, and how hardware and intelligence increasingly shape one another over time. The lab sits at the intersection of machine learning, distributed systems, computer architecture, compilers, high-performance computing, reinforcement learning, control systems, performance engineering, and computational physics. It operates with a dual mandate: ask foundational scientific questions about heterogeneous computation, and build systems that embody and test the resulting ideas under real infrastructure constraints. Operational systems produce traces, telemetry, failures, bottlenecks, and performance data that feed back into research. The core premise: the structure of computation shapes the space of possible algorithms. DeepMind for Compute.
Building the distributed infrastructure layer powering global AI inference — the missing layer between hyperscale AI training and real-world AI deployment. The structural shift driving this: AI is entering the inference era. Training infrastructure built the first phase of AI; inference infrastructure will power the next. The two have fundamentally different requirements. Training runs on centralized hyperscale campuses of 200 MW to 1 GW, optimized for throughput over weeks-long runs. Inference demands low latency, distributed demand coverage, tight coupling with power availability, and the ability to deploy and scale rapidly across regions. The architecture we are building is a network of modular, inference-optimized compute nodes in the 5–20 MW range — purpose-built for the workload characteristics of real-time AI deployment, not retrofitted from training infrastructure. Each node is designed from first principles to minimize variability and maximize deployment speed, delivering significantly faster time-to-power than traditional data center development. The platform scales horizontally across nodes rather than vertically within a single campus, targeting a globally distributed network of inference capacity. The first infrastructure node is fully contracted and operational. The active pipeline represents 100 MW+ of identified demand. The platform target is 1 GW — 100 nodes deployed globally.
Led $250M+ in AI venture deployments at the world's first cross-VC and cross-CVC AI investor network — a coalition of the most technically sophisticated corporate venture arms in technology, including NVIDIA, Salesforce Ventures, Microsoft M12, Intel Capital, Samsung, MongoDB, and SignalFire. Handwave was built on the premise that the most consequential AI investments would be made by institutions with deep technical surface area across the full AI stack: silicon, systems, models, and applications. Focused deployment on frontier AI infrastructure, autonomous systems, and scalable compute — the layers where durable value compounds.
Founded a deliberately contrarian early-stage deep tech fund at a moment when the venture consensus was retreating from hardware, robotics, AI infrastructure, and physical systems — dismissing them as too capital-intensive and too specialized. The investment thesis was built on a systems-level view of where durable technological value accumulates: not at the application layer, but in the foundational infrastructure, autonomous systems, and physical AI that everything else runs on. Deployed $10M across ~25 companies. Six portfolio companies achieved unicorn status — SpaceX, Figure AI, xAI, Cart.com, OpenSea, and Rain — validating the thesis that physical AI, autonomous systems, and scalable infrastructure were the defining technology categories of the decade. Fund 1 delivered 30%+ Net IRR. Published an analysis of the deep tech investment cycle in TechCrunch in July 2024, syndicated by Yahoo Finance and POCIT.
Founded GoCrazy as a fintech-powered social commerce platform built to solve digital commerce access for Pakistan's largely unbanked population — a distributed systems problem in one of the world's most constrained infrastructure environments. Built a fully integrated mobile commerce ecosystem designed for low-bandwidth, intermittent-connectivity conditions: a consumer-facing gamified marketplace, a driver coordination app, a pickup point management system, an operating system dashboard, and proprietary offline-first payment infrastructure enabling purchasing without a credit or debit card. The architecture had to be resilient to network partitions, optimized for mobile-first edge conditions, and capable of processing payments across a population where traditional financial rails were structurally inaccessible. Reached 1M monthly active users and $20M GMV. Backed by Plug & Play Capital, Century Oak Ventures, and partners from Andreessen Horowitz and JD.com.
Co-founded Fridgio out of the MIT entrepreneurial ecosystem — built on the conviction that cold-chain logistics, one of the world's most consequential physical infrastructure problems, had never received a serious systems-level solution. Built a digital freight brokerage and IoT SaaS platform connecting shippers to a network of sensor-instrumented refrigerated trucks, delivering real-time temperature telemetry, load visibility, and logistics state management across food, pharmaceutical, and chemical supply chains. The technical core was an IoT edge compute layer: distributed sensor networks on physical assets, real-time telemetry pipelines, edge state management under connectivity constraints, and a cloud-side orchestration layer for logistics decision-making. Backed by Right Side Capital and Forum Ventures. Reached $100M GMV. Secured strategic partnerships with Americold and Lineage Logistics — two of the largest cold-chain operators in the world. Successful asset sale.
Founded and scaled GroomGraph — a dual-sided marketplace and business management platform connecting consumers with beauty professionals across the US. Built a fully integrated system spanning a consumer mobile app, a professional-facing business management suite, and proprietary POS hardware — replacing the disconnected, largely offline operational infrastructure most beauty businesses ran on. The platform handled booking, payments, payroll, inventory, and marketing in a unified architecture, processing transactions across a fragmented, cash-heavy industry that had never had scalable digital infrastructure. Bootstrapped to 500K end users, 25K+ verified professional profiles, and $10M in payments processed. Acquired by ZMC Private Equity, which deployed the platform as infrastructure for a roll-up strategy across independent beauty businesses.
First dedicated supply chain systems executive in company history — a role created specifically for this mandate, filled by recruiting directly from Amazon. Reported to the EVP and Chief Supply Chain Officer. Designed and built Voyager from the ground up: a custom cloud-based SaaS supply chain management platform that unified Cabela's previously siloed operational systems — spanning inventory management, warehouse management, distribution, payments, track and trace, marketplace, and core operating systems — into a single integrated architecture. The core engineering challenge was replacing a deeply entrenched legacy AS/400 infrastructure across a bi-national operation while maintaining continuity of a complex, high-volume retail supply chain. Deployed fully across US and Canadian operations within 14 months. Built and led a unified systems organization of 53 direct reports. Bass Pro Shops initiated acquisition discussions in late 2015 — months after this role concluded — and acquired Cabela's for approximately $5.5B in 2017, with next-generation supply chain infrastructure cited as a key driver of the company's valuation.
Joined as founding hire #3 on the team that built two of the most consequential infrastructure systems in the history of modern logistics — work that established the architectural patterns now underlying the most sophisticated supply chain intelligence and physical AI systems in the world. Amazon Robotics: the autonomous fulfillment system that redefined warehouse operations at global scale — autonomous mobile robots navigating dynamic physical environments, real-time path planning under high-density human–robot interaction constraints, and the physical AI primitives that now underpin the entire autonomous systems industry. SCOT (Supply Chain Optimization Technologies): a large-scale distributed AI and operations research engine modeling demand, managing multi-echelon inventory positioning, and orchestrating fulfillment decisions across Amazon's global network — combining stochastic optimization, real-time data pipelines, and constraint-solving at a scale and complexity that had never been attempted. SCOT is responsible for $30B+ in cumulative operational savings and continues to run and compound today.
Worked in product on the platform architecture decisions that defined how software reaches users at scale on Apple platforms. Contributed to the launch of the Mac App Store — the distribution infrastructure that redefined software delivery for an entire industry, requiring new approaches to content delivery, digital rights management, payment processing, and developer ecosystem design at global scale — and originated the freemium model later used for iCloud. These were not product decisions in the conventional sense; they were infrastructure decisions about how software is built, priced, distributed, and monetized at scale — decisions whose architectural consequences are still compounding across the industry today. The Mac App Store recorded 1 million downloads in its first 24 hours and 100 million within its first 12 months. The freemium model originated for iCloud is now used by over one billion people.
Currently building in stealth at the AI infrastructure and foundational AI systems layer.
Open to conversations with investors aligned with frontier compute and AI infrastructure.
