75% less VRAM.
Same model quality.
DreamNova builds a patent-novel sparse-tensor runtime that drops in above the existing AI inference stack. Architecture-independent. Method claim, not a chip claim.
One software primitive.
Three compounding moats.
The seed thesis rests on a single anchor: a runtime primitive that sits one layer above memory-disaggregated AI servers (Majestic-class, UniFabriX-class). Three additional patent-candidates compound around it as the company matures. No chip swap required. No vendor lock-in.
Sparse-tensor runtime
Materializes only the active inference path in VRAM at runtime — the inactive remainder is reconstructed on-demand from a learned topological prior. Method claim, architecture-independent.
Polymorphic compiler
A Rust-MIR compiler that emits routed binaries. Composes with the runtime to produce moving-target inference graphs — defensible against side-channel attacks.
Constitutional hypervisor
ZKP-verified AI controls. Operators of EU AI Act high-risk systems will be required to deploy this class of constitutional / verifiable AI controls from August 2026 onward.
ZK-SNARK identity
Sub-200 ms identity primitive for the post-quantum migration. NIST PQC mandates lattice / ZKP rotation across enterprise infrastructure by 2030.
What we will measure,
and against what.
DreamNova is pre-prototype. The runtime is a method claim awaiting USPTO Wave 1 (May 2026). What follows is the Year-1 benchmark plan — the measurements we will publish post-filing, the hardware they will run on, and the published baselines they will be compared against. All numbers below labelled "Projected" are design targets, not measured results.
| Metric | Baseline (published) | ASL design target | Status |
|---|---|---|---|
| VRAM footprint · 7B-class model · iso-quality | Dense FP16 baseline · ~14 GB | ~3.5 GB · 75% reduction | Projected |
| VRAM footprint · 70B-class model · iso-quality | Dense FP16 baseline · ~140 GB | ~35 GB · 75% reduction | Projected |
| Quality delta (perplexity, MMLU, GSM8K) | Dense baseline · 0.0 | Within ±2% of dense baseline | Projected |
| Inference latency overhead | Dense baseline · 1.00x | Target ≤ 1.10x (10% overhead ceiling) | Projected |
| vLLM PagedAttention · published memory savings | ~20–30% over naive batching | Composes with vLLM | Published baseline |
| FlashAttention-2 · published throughput gain | ~2x on long sequences | Orthogonal — both compose with ASL | Published baseline |
| Hardware target | — | Single H100 80GB · single L40S · M-series Apple | Year-1 plan |
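The dense FP16 baselines in the table follow directly from parameter count: FP16 stores two bytes per parameter. A minimal sketch of that arithmetic, with the 75% reduction applied as a design target (function names are illustrative, not part of the ASL codebase):

```python
# Back-of-envelope check of the table's dense FP16 baselines.
# FP16 stores 2 bytes per parameter; the ASL design target is a
# 75% reduction of that dense footprint. Illustrative only.

def dense_fp16_gb(params_billion: float) -> float:
    """Dense FP16 weight footprint in GB (2 bytes/param, 1e9 bytes/GB)."""
    return params_billion * 1e9 * 2 / 1e9

def asl_target_gb(params_billion: float, reduction: float = 0.75) -> float:
    """Projected footprint after the claimed 75% reduction."""
    return dense_fp16_gb(params_billion) * (1 - reduction)

for p in (7, 70):
    print(f"{p}B: dense {dense_fp16_gb(p):.0f} GB -> target {asl_target_gb(p):.1f} GB")
# -> 7B: dense 14 GB -> target 3.5 GB
# -> 70B: dense 140 GB -> target 35.0 GB
```

Weights only; KV cache and activations are additional and scale with batch size and sequence length, which is why the table holds them out of the iso-quality comparison.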
Every benchmark will ship with a reproduction script: any engineer can run `python3 bench.py` on their own machine and reproduce the dense baseline numbers in under five minutes. The repository ships a Switch-style top-K activation reference baseline alongside, making the gap to the DreamNova design target visible and quantifiable.
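For context on the Switch-style top-K reference baseline mentioned above: in top-K routing, a learned router scores experts per token and only the top-K experts' weights need to be resident for that token. A minimal sketch (hypothetical, not the repository's actual `bench.py`):

```python
# Minimal Switch-style top-K gating sketch (hypothetical, not the
# repository's actual bench.py): a router scores experts per token
# and only the top-K experts' weights must be materialized.
import numpy as np

def top_k_route(router_logits: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k highest-scoring experts per token."""
    # Sort descending by score, keep the first k expert indices.
    return np.argsort(-router_logits, axis=-1)[:, :k]

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 8))    # 4 tokens, 8 experts
active = top_k_route(logits, k=2)   # shape (4, 2)

# Fraction of expert weights that must be resident for this batch:
resident = len(np.unique(active)) / logits.shape[1]
print(active.shape, resident)
```

This is the baseline's logic, not DreamNova's: the pitch distinguishes the ASL runtime (reconstruction from a learned prior) from simple top-K residency, and this sketch only quantifies the latter.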
USPTO Wave 1
File the four anchor patent-candidates as provisionals. ILPO review in parallel. BIRD Foundation submission 14 May.
Seed close + first hires
$3–5M seed round closed with deep-tech lead. ML-Systems Lead, Inference Engineering Lead, IP Counsel open.
First measured benchmark
First runtime integration on a Tier-1 inference partner's silicon. Published methodology, reproducible measurements.
Series A scoping
EIC Accelerator close (€2.5M). Open commercial licensing of cross-patents. Series A on benchmark-validated traction.
Four forces, one infrastructure layer.
Frontier model parameter counts double roughly every six months. Memory bandwidth and rack interconnect do not. At the same time, post-quantum migration deadlines and EU AI Act compliance windows land on the same compute budget. DreamNova sits at the intersection.
The memory wall
HBM3 capacity grows ~1.7× per year, while frontier model parameters grow ~10× per year. Memory has become the binding constraint, not FLOPs.
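At the growth rates quoted above, the gap compounds: each year, parameter growth outpaces HBM capacity by roughly 10/1.7 ≈ 5.9×, and that factor multiplies annually. A quick illustration:

```python
# Compounding gap between HBM capacity growth (~1.7x/yr) and
# frontier parameter growth (~10x/yr), per the figures above.

def gap_after(years: int, hbm: float = 1.7, params: float = 10.0) -> float:
    """Factor by which parameter growth outpaces HBM capacity growth."""
    return (params / hbm) ** years

for y in (1, 2, 3):
    print(f"year {y}: params outgrow HBM capacity by ~{gap_after(y):.0f}x")
```

After one year the gap is ~6×; after three, over 200×. That compounding, not any single year's delta, is what makes memory the binding constraint.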
NIST PQC migration
NIST PQC selection finalized · NSA CNSA 2.0 deadlines set · enterprises must rotate to lattice / ZKP primitives across the stack by 2030.
EU AI Act · 02 Aug 2026
High-risk AI systems must demonstrate verifiable controls. Constitutional / ZKP-verifiable AI controls become a procurement requirement.
Ecosystem alignment
Memory-disaggregation startups (Majestic Labs, UniFabriX, NeuroBlade) have taken the silicon side. DreamNova is the runtime primitive that sits one layer above.
Two founders. Three roles to close.
Honest framing: DreamNova is two people deep on the IP. The seed funds the three hires that take us from research-stage to a venture-backed company — the gap is identified, not hidden.
David Amor
Founder, builder, and operator. Designed and shipped the patent canon from Jerusalem between 2024 and 2026. Shipping cadence is quantified below: execution capacity is the team's load-bearing signal until the seed hires close.
- Shipping velocity: 167 repositories created on github.com/CodeNoLimits across 2024–2026 (110 public, 57 private)
- Inventor of record on the lead anchor patent-candidate
- Direct relationships with IIA Tnufa + BIRD Foundation programme officers
- Production launches: LeeCut Max (SaaS · 80 videos delivered), Librairie Breslev (e-commerce · live), Khavrouta (open-source voice shell)
Ariel Belhadj
Co-inventor named on the majority of the patent-candidates in the canon. Architectural partner on the IP since 2024. Technical depth on claim drafting and methodology.
- Co-inventor across the anchor patent stack
- Architectural partner since 2024
- Technical writing + claim-drafting lead
Hiring with the seed
The team gap is part of the use-of-funds, not a surprise. Three roles identified, scoped, and pre-sourced — open after seed close. We're talking to candidates already.
- ML-Systems Lead · ex-NVIDIA / Mellanox / Annapurna profile
- Inference Engineering Lead · CUDA + TRT-LLM depth
- IP Counsel · USPTO + EU first-to-file experience
Non-dilutive runway scoped before the equity ask.
Three programme applications stack to ~₪15M of potential non-dilutive capital before BIRD's 14 May filing window. Honest framing: applications, not awards. Seed round runs in parallel — we want institutional technical validation before equity dilution, not after.
Speak with the founder.
For investors, strategic partners, and licensing conversations. Detailed portfolio review is available to vetted parties under NDA — no patent-candidate detail is published before USPTO filing.
The DreamNova patent canon is composed of 23 patent-candidates pre-filing through USPTO Wave 1 (May 2026 onward). No claim language, fabrication parameters, or method-specific details are published on this site. Year-1 measured benchmarks will be published post-filing on a Tier-1 inference partner's stack.