Startup Orbital today announced funding from a16z speedrun to support Orbital-1, the company’s first test mission in 2027 aimed at deploying data centers in space, and its Factory-1 R&D facility in Los Angeles. The company plans to build and operate AI (artificial intelligence) data centers in space using solar power and radiative cooling, removing the energy and cooling constraints that limit terrestrial AI infrastructure.

“Speedrun backs founders to explore ambitious ideas; the harder the problem, the better,” said Andrew Chen, General Partner at a16z speedrun. “Orbital is taking on AI’s biggest constraint with a bold and radical idea.”

According to Orbital, the demand for AI compute is surging, but the bottleneck is no longer chips; it’s the power required to run them. The company was founded on the belief that the only way to scale compute and unlock future progress on AI is to stop competing for power on Earth and generate it in orbit.

“AI progress is being constrained by the grid,” said Euwyn Poon, CEO and Founder of Orbital. “Data center economics are dominated by electricity and cooling, and both are getting harder. In orbit, solar power is continuous, and cooling is fundamentally different. Orbital is building compute infrastructure that removes the energy ceiling and scales with AI’s potential.”

Poon is a Cornell-educated engineer and lawyer who previously founded Spin, the micromobility company acquired by Ford. At Spin, he led the building and deployment of hundreds of thousands of small electric vehicles across 100 cities and scaled the business to over $100 million in revenue. After exiting Spin, he began investing in AI infrastructure and saw the impending constraint clearly.

“The energy ceiling on AI isn’t theoretical; it’s a real constraint that will impede the advancement of intelligence,” said Poon. “This is the solution.”

Orbital is designing and manufacturing a constellation of satellites to operate in low Earth orbit. Each satellite houses a small cluster of Nvidia Space-1 Vera Rubin GPUs, powered by solar arrays and cooled by radiating heat directly into space. In a sun-synchronous orbit, solar power is continuous and stronger than at ground level, with no weather, no night, and no dependence on the power grid.

Orbital’s compute infrastructure is designed around a specific technical insight. Training large AI models requires thousands of tightly coupled GPUs, communicating at near-zero latency. That architecture does not translate to satellites. Inference is different. Each request is handled independently, and capacity can be distributed across many nodes. The company is focused on inference, where orbital compute can scale as a constellation and serve workloads in parallel.
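The distinction the company draws can be illustrated in a few lines: inference requests are independent, so a scheduler can simply fan them out across satellite nodes with no inter-node communication, unlike the gradient exchange that couples GPUs during training. This is a minimal sketch with hypothetical names, not Orbital's actual software:

```python
from dataclasses import dataclass, field
from itertools import cycle

@dataclass
class SatelliteNode:
    """Hypothetical stand-in for one orbital compute node."""
    name: str
    handled: list = field(default_factory=list)

    def infer(self, request: str) -> str:
        # Stand-in for running an inference workload on the node's GPUs.
        self.handled.append(request)
        return f"{self.name}:{request}"

def route_requests(nodes, requests):
    """Round-robin independent requests across nodes.

    Because each request is self-contained, no node needs to talk to
    another -- which is why inference, unlike training, can scale out
    across a constellation of loosely connected satellites.
    """
    rr = cycle(nodes)
    return [next(rr).infer(r) for r in requests]

nodes = [SatelliteNode(f"sat-{i}") for i in range(3)]
results = route_requests(nodes, [f"req-{i}" for i in range(6)])
# Each satellite ends up serving two requests, entirely independently.
```

The same fan-out pattern does not work for training, where thousands of GPUs must exchange gradients at near-zero latency on every step.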

Its first satellite, Orbital-1, is scheduled to launch on a SpaceX Falcon 9 in April 2027. Its primary goals are to validate sustained GPU operation in orbit, test radiation hardening, and, after validation, run commercial AI inference workloads in space. The company is also in the process of filing with the FCC to deploy a constellation of satellites for orbital AI compute infrastructure.

At its GTC event in March, Nvidia announced that the Space-1 Vera Rubin Module is the latest part of its accelerated platform for space. Compared with its H100 GPU, the Rubin GPU on the module delivers up to 25 times more AI compute for space-based inferencing, enabling next-generation compute for ODCs (orbital data centers), advanced geospatial intelligence processing, and autonomous space operations.

“Space computing, the final frontier, has arrived,” said Jensen Huang, Founder and CEO of Nvidia. “As we deploy satellite constellations and explore deeper into space, intelligence must live wherever data is generated. AI processing across space and ground systems enables real-time sensing, decision-making, and autonomy, transforming orbital data centers into instruments of discovery and spacecraft into self-navigating systems.”

The Space-1 Vera Rubin Module is engineered to deliver data-center-class AI at scale, enabling LLMs (large language models) and advanced foundation models to operate directly in space. Its tightly integrated CPU-GPU architecture and high-bandwidth interconnect provide the performance and memory needed to process massive data streams from space-based instruments in real time.