Jerusalem · Israel · 2026 · Pre-USPTO Wave 1

75% less VRAM.
Same model quality.

DreamNova builds a patent-candidate sparse-tensor runtime that drops in one layer above the existing AI inference stack. Architecture-independent. A method claim, not a chip claim.

Approach · 01

One software primitive.
Three compounding moats.

The seed thesis rests on a single anchor: a runtime primitive that sits one layer above memory-disaggregated AI servers (Majestic-class, UniFabriX-class). Three additional patent-candidates compound around it as the company matures. No chip swap required. No vendor lock-in.

Anchor 01 · Seed focus

Sparse-tensor runtime

Materializes only the active inference path in VRAM at runtime — the inactive remainder is reconstructed on-demand from a learned topological prior. Method claim, architecture-independent.
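
A minimal sketch of the residency idea, assuming a fixed layer budget and an LRU eviction policy. Every name below (SparseTensorRuntime, fetch, the placeholder _reconstruct step) is hypothetical and stands in for the claimed method without disclosing it:

```python
from collections import OrderedDict

class SparseTensorRuntime:
    """Hypothetical sketch: keep at most `budget` layers resident."""
    def __init__(self, n_layers, budget):
        self.n_layers = n_layers
        self.budget = budget           # max layers resident in "VRAM"
        self.resident = OrderedDict()  # layer_id -> weights, LRU order

    def _reconstruct(self, layer_id):
        # Placeholder for on-demand reconstruction from a learned
        # topological prior; here it just fabricates a stub blob.
        return f"weights<{layer_id}>"

    def fetch(self, layer_id):
        if layer_id in self.resident:          # hot: refresh LRU position
            self.resident.move_to_end(layer_id)
            return self.resident[layer_id]
        if len(self.resident) >= self.budget:  # cold: evict LRU layer
            self.resident.popitem(last=False)
        self.resident[layer_id] = self._reconstruct(layer_id)
        return self.resident[layer_id]

rt = SparseTensorRuntime(n_layers=32, budget=8)  # 25% residency cap
for layer_id in [0, 3, 7, 3, 12]:                # hypothetical active path
    rt.fetch(layer_id)
print(f"{len(rt.resident)} of {rt.n_layers} layers resident")  # 4 of 32
```

The point of the sketch is the invariant, not the policy: at no step are more than budget layers resident, whatever path the inference takes.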

Anchor 02 · Year 2

Polymorphic compiler

A Rust-MIR compiler that emits routed binaries. Composes with the runtime to produce moving-target inference graphs — defensible against side-channel attacks.
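
The compiler target is Rust-MIR; the sketch below models only the moving-target routing concept, in Python for consistency with the other examples on this page. The routes and kernels are hypothetical, not compiler output:

```python
import secrets

# Two functionally equivalent "kernels" with different instruction mixes.
def route_a(x):
    return (x * 2) + 1

def route_b(x):
    return (x + x) + 1

ROUTES = [route_a, route_b]

def infer(x):
    # Pick a route non-deterministically per request, so the observable
    # execution graph varies between otherwise identical calls.
    return secrets.choice(ROUTES)(x)

assert infer(21) == 43  # same result on every route
```

Because the routes compute the same result through different instruction mixes, a profiler watching timing or access patterns sees a different graph on each request.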

Anchor 03 · Compliance

Constitutional hypervisor

ZKP-verified AI controls. From 2 August 2026, EU AI Act obligations for high-risk systems make this class of constitutional / verifiable AI controls a procurement requirement.

Anchor 04 · Post-quantum

ZK-SNARK identity

Sub-200 ms identity primitive for the post-quantum migration. NIST PQC standards are finalized, and NSA CNSA 2.0 timelines push enterprises to rotate to lattice / ZKP primitives by 2030.

Benchmark plan · 02

What we will measure,
and against what.

DreamNova is pre-prototype. The runtime is covered by a method claim awaiting USPTO Wave 1 filing (May 2026). What follows is the Year-1 benchmark plan: the measurements we will publish post-filing, the hardware they will run on, and the published baselines they will be compared against. All numbers below labelled "Projected" are design targets, not measured results.

| Metric | Baseline (published) | ASL design target | Status |
| VRAM footprint · 7B-class model · iso-quality | Dense FP16 · ~14 GB | ~3.5 GB · 75% reduction | Projected |
| VRAM footprint · 70B-class model · iso-quality | Dense FP16 · ~140 GB | ~35 GB · 75% reduction | Projected |
| Quality delta (perplexity, MMLU, GSM8K) | Dense baseline · 0.0 | Within ±2% of dense baseline | Projected |
| Inference latency overhead | Dense baseline · 1.00× | ≤1.10× (10% overhead ceiling) | Projected |
| vLLM PagedAttention · memory savings | ~20–30% over naive batching | Composes with vLLM | Published baseline |
| FlashAttention-2 · throughput gain | ~2× on long sequences | Orthogonal · composes with ASL | Published baseline |
| Hardware target | n/a | Single H100 80 GB · single L40S · M-series Apple | Year-1 plan |
Baselines: vLLM (Kwon et al., 2023) · FlashAttention-2 (Dao, 2023) · Industry-standard inference benchmarks.
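
For reviewers who want the arithmetic behind the VRAM rows, a short worked example. The only inputs are assumptions already stated in the table: FP16 weights at 2 bytes per parameter and the 75% reduction design target:

```python
BYTES_PER_FP16 = 2   # bytes per parameter, per the table's dense baseline
REDUCTION = 0.75     # 75% VRAM reduction design target

for params_billions in (7, 70):
    # billions of params × 2 bytes = decimal GB (the 1e9 factors cancel)
    dense_gb = params_billions * BYTES_PER_FP16
    target_gb = dense_gb * (1 - REDUCTION)
    print(f"{params_billions}B: dense FP16 ~{dense_gb:.1f} GB "
          f"-> target ~{target_gb:.1f} GB")

# 7B:  dense FP16 ~14.0 GB -> target ~3.5 GB
# 70B: dense FP16 ~140.0 GB -> target ~35.0 GB
```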
Live · public reference implementation. The measurement methodology behind these rows ships publicly today at github.com/CodeNoLimits/dreamnova-bench. Reviewers can clone the repo, run python3 bench.py on their own machine, and reproduce the dense baseline numbers in under five minutes. The repository also ships a Switch-style top-K activation reference baseline, making the gap to the DreamNova design target visible and quantifiable.
Honesty disclosure. All "ASL design target" rows are theoretical projections derived from the patent claim and sparse-MoE literature. They are not measured results. The Year-1 milestone is precisely to publish the measured numbers on a Tier-1 inference partner's stack, with reproducible methodology, post-USPTO Wave 1 filing in May 2026.
Q2 2026 · Now

USPTO Wave 1

File the four anchor patent-candidates as provisionals. ILPO review in parallel. BIRD Foundation submission 14 May.

Q3 2026

Seed close + first hires

$3–5M seed round closed with deep-tech lead. ML-Systems Lead, Inference Engineering Lead, IP Counsel open.

Q4 2026

First measured benchmark

First measured runtime integration with a Tier-1 inference partner. Published methodology, reproducible measurements.

Q1–Q2 2027

Series A scoping

EIC Accelerator close (€2.5M). Open commercial licensing of cross-patents. Series A on benchmark-validated traction.

Why now · 03

Four forces, one infrastructure layer.

Frontier model parameter counts double roughly every six months. Memory bandwidth and rack interconnect do not. At the same time, post-quantum migration deadlines and EU AI Act compliance windows land on the same compute budget. DreamNova sits at the intersection.

Force 01 · Compute crunch

The memory wall

HBM3 capacity grows ~1.7× per year, while frontier model parameters grow ~10× per year. Memory has become the binding constraint, not FLOPs.
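
A back-of-envelope compounding of the two growth rates quoted above; the three-year horizon is illustrative:

```python
HBM_GROWTH = 1.7     # HBM capacity growth per year, as quoted above
PARAM_GROWTH = 10.0  # frontier parameter growth per year, as quoted above

hbm, params = 1.0, 1.0
for year in (1, 2, 3):
    hbm *= HBM_GROWTH
    params *= PARAM_GROWTH
    print(f"year {year}: parameter demand outgrows HBM by {params / hbm:.0f}x")

# year 1: ~6x · year 2: ~35x · year 3: ~204x
```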

Force 02 · Post-quantum

NIST PQC migration

NIST PQC selection finalized · NSA CNSA 2.0 deadlines set · enterprises must rotate to lattice / ZKP primitives across the stack by 2030.

Force 03 · Regulation

EU AI Act · 02 Aug 2026

High-risk AI systems must demonstrate verifiable controls. Constitutional / ZKP-verifiable AI controls become a procurement requirement.

Force 04 · Memory-wall thesis

Ecosystem alignment

Memory-disaggregation startups (Majestic Labs, UniFabriX, NeuroBlade) have taken the silicon side. DreamNova is the runtime primitive that sits one layer above.

Team · 04

Two founders. Three roles to close.

Honest framing: DreamNova is two people deep on the IP. The seed funds the three hires that take us from research-stage to a venture-backed company — the gap is identified, not hidden.


David Amor

Founder · CEO · Inventor of record

Founder, builder, and operator. Designed and shipped the patent canon from Jerusalem between 2024 and 2026. Shipping cadence is quantified below; execution capacity is the team's load-bearing signal until the seed hires close.

  • Shipping velocity: 167 repositories created on github.com/CodeNoLimits across 2024–2026 (110 public, 57 private)
  • Inventor of record on the lead anchor patent-candidate
  • Direct relationships with IIA Tnufa + BIRD Foundation programme officers
  • Production launches: LeeCut Max (SaaS · 80 videos delivered), Librairie Breslev (e-commerce · live), Khavrouta (open-source voice shell)

Ariel Belhadj

Co-inventor · Partner

Co-inventor named on the majority of the patent-candidates in the canon. Architectural partner on the IP since 2024. Technical depth on claim drafting and methodology.

  • Co-inventor across the anchor patent stack
  • Architectural partner since 2024
  • Technical writing + claim-drafting lead

Hiring with the seed

Year-1 key hires · seed-funded · sourced

The team gap is part of the use-of-funds, not a surprise. Three roles identified, scoped, and pre-sourced — open after seed close. We're talking to candidates already.

  • ML-Systems Lead · ex-NVIDIA / Mellanox / Annapurna profile
  • Inference Engineering Lead · CUDA + TRT-LLM depth
  • IP Counsel · USPTO + EU first-to-file experience
Capital strategy · 05

Non-dilutive runway scoped before the equity ask.

Three programme applications stack to ~₪15M of potential non-dilutive capital before BIRD's 14 May filing window. Honest framing: applications, not awards. Seed round runs in parallel — we want institutional technical validation before equity dilution, not after.

₪1.95M
IIA Tnufa + Frontier Compute applications · Israel
$1M
BIRD Foundation US-IL track · filing window 14 May 2026
€2.5M
EIC Accelerator scoped · Q3 2026 filing planned
Contact · 06

Speak with the founder.

For investors, strategic partners, and licensing conversations. Detailed portfolio review is available to vetted parties under NDA — no patent-candidate detail is published before USPTO filing.

Founder
David Amor
Location
Jerusalem · Israel
IP Notice

The DreamNova patent canon is composed of 23 patent-candidates pre-filing through USPTO Wave 1 (May 2026 onward). No claim language, fabrication parameters, or method-specific details are published on this site. Detailed portfolio review is available to vetted investors and partners under NDA. Year-1 measured benchmarks will be published post-filing on a Tier-1 inference partner's stack.