VerifyFetch

Download large files.
Verify. Resume.

4GB AI model, network drops at 3.8GB? Resume from 3.8GB, not zero.
Drop-in integrations for Transformers.js and WebLLM. 2MB constant memory. Multi-CDN failover.

app.ts
import { verifyFetch } from 'verifyfetch';

const response = await verifyFetch('/model.bin', {
  sri: 'sha256-uU0nuZNN...'
});

// Throws if tampered. Zero config.

Why VerifyFetch?

Native fetch() supports the integrity option, but it buffers the entire file in memory before verifying.

A 4GB AI model needs 4GB+ RAM just to verify the hash.

Large WASM modules and AI models? Native verification can crash the browser tab.

One CDN compromise = malicious code in your users' browsers.

It's happened before.

  • 2024, Polyfill.io: 100,000+ sites compromised via CDN takeover
  • 2021, ua-parser-js: 7M weekly downloads served malware
  • 2018, event-stream: Bitcoin wallet credentials stolen

Why Not Just Use Native fetch({ integrity })?

Native fetch has basic SRI verification, but VerifyFetch adds streaming, resumable downloads, and fail-fast chunked verification.

Feature                           Native fetch    VerifyFetch
Basic SRI Verification            ✓               ✓
Progress Callbacks                ✗               ✓
Streaming Output                  ✗               ✓
Service Worker Mode               ✗               ✓
Chunked Verification (Fail-Fast)  ✗               ✓
Multi-CDN Failover                ✗               ✓
Fallback URLs                     ✗               ✓
Manifest System                   ✗               ✓
CI/CD Enforcement                 ✗               ✓

Service Worker mode lets you protect every fetch in your app with zero code changes. Just add one line to your service worker.

What You Get

Everything needed to download, verify, and resume large files in the browser.

Transformers.js + WebLLM

Drop-in integrations for HuggingFace Transformers.js and WebLLM. Verified model loading with zero code changes.

Resumable Downloads

Network fails at 80%? Resume from 80%. Progress persists to IndexedDB across page reloads.
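The mechanic behind resuming can be sketched with a plain HTTP Range request. This illustrates the underlying pattern, not verifyfetch's internals; the helper name is an assumption for illustration.

```typescript
// Sketch of the HTTP mechanic behind resuming (not verifyfetch internals):
// with N verified bytes already persisted, request only the remainder.
export function resumeHeaders(bytesAlreadyStored: number): Record<string, string> {
  // An open-ended Range asks the server for everything from that offset on.
  return bytesAlreadyStored > 0 ? { Range: `bytes=${bytesAlreadyStored}-` } : {};
}
```

A server that supports Range requests replies 206 Partial Content, so fetch('/model.bin', { headers: resumeHeaders(3_800_000_000) }) would pull only the last 0.2GB of a 4GB file.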

Chunked Verification

Detect corruption at chunk 5, stop immediately. Don't download 3995 more chunks.

Streaming Output

2MB memory for a 4GB file. Process chunks as they arrive.
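Because a verifyFetch result is a standard Response (as the examples on this page show), the constant-memory pattern is the usual streaming reader loop. A minimal sketch:

```typescript
// Consume a Response body chunk-by-chunk: memory use stays at one chunk,
// no matter how large the file is. Works for any Response, including one
// returned by verifyFetch.
export async function streamBytes(
  response: Response,
  onChunk: (chunk: Uint8Array) => void
): Promise<number> {
  const reader = response.body!.getReader();
  let total = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    total += value.byteLength; // count bytes seen so far
    onChunk(value);            // hand the chunk off (WASM heap, IndexedDB, ...)
  }
  return total;
}
```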

Service Worker Mode

Add one file, verify all fetches. No changes to existing code needed.

Multi-CDN Failover

Try CDN1, CDN2, CDN3. First verified response wins.
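The failover logic reduces to "first verified response wins". A generic sketch of that pattern (the actual verifyFetch option for configuring fallback URLs is not shown here):

```typescript
// Try each source in order; the first attempt that resolves wins.
// A tampered or unreachable CDN rejects, and we fall through to the next.
export async function firstVerified<T>(attempts: Array<() => Promise<T>>): Promise<T> {
  let lastError: unknown = new Error('no sources provided');
  for (const attempt of attempts) {
    try {
      return await attempt();
    } catch (err) {
      lastError = err; // remember why this CDN failed, try the next one
    }
  }
  throw lastError;
}
```

With verifyFetch, each attempt would be something like () => verifyFetch(cdnUrl, { sri }), so a CDN serving tampered bytes is rejected and the next mirror is tried.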

Progress Tracking

Bytes loaded, percent complete, speed, ETA. All in one callback.
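The quantities in that callback all derive from two numbers: bytes loaded and elapsed time. A sketch of the arithmetic (the exact callback shape in verifyfetch is not spelled out on this page):

```typescript
export interface Progress {
  loaded: number;      // bytes received so far
  total: number;       // total bytes expected
  percent: number;     // 0..100
  bytesPerSec: number; // average download speed
  etaSec: number;      // estimated seconds remaining
}

// Derive percent, speed, and ETA from bytes loaded over elapsed seconds.
export function computeProgress(loaded: number, total: number, elapsedSec: number): Progress {
  const bytesPerSec = elapsedSec > 0 ? loaded / elapsedSec : 0;
  const etaSec = bytesPerSec > 0 ? (total - loaded) / bytesPerSec : Infinity;
  return { loaded, total, percent: (loaded / total) * 100, bytesPerSec, etaSec };
}
```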

Manifest System

One JSON file for all your hashes. CLI generates it from your files or HuggingFace models.

CLI Tools

Hash local files with sign, hash HuggingFace models with hash-model, enforce in CI with enforce.

Simple API, Real Protection

Multiple ways to protect your assets. Choose what fits your needs.

app.ts
import { verifyFetch } from 'verifyfetch';

// Verify a file against its SRI hash
const response = await verifyFetch('/model.bin', {
  sri: 'sha256-uU0nuZNNPgilLlLX2n2r+sSE7+N6U4DukIj3rOLvzek='
});

// That's it. Throws IntegrityError if hash doesn't match.
const model = await response.arrayBuffer();

Built for Critical Assets

Protect the files that power your application.

Transformers.js

Verified model loading for HuggingFace Transformers.js. Drop-in pipeline() replacement with integrity checks and resumable downloads.

npm install @verifyfetch/transformers

WebLLM

Run LLMs in the browser with verified model weights. Drop-in MLCEngine replacement that catches tampered or corrupted shards.

npm install @verifyfetch/webllm

WebAssembly

Verify .wasm modules before instantiation. Protect compiled code from supply chain attacks.

/engine.wasm

ONNX Runtime

Secure multi-GB ONNX model downloads. Resumable transfers that survive page reloads and network drops.

/models/model.onnx

Config Files

Ensure critical JSON and YAML files are not tampered with. Verify settings, schemas, and rules.

/config/settings.json

Any Binary

Fonts, images, datasets, safetensors. If you fetch it, verify it.

/assets/data.bin
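For the WebAssembly case above, verified bytes drop straight into the standard instantiation API. A minimal sketch (the verifyFetch call mirrors the examples on this page; everything past it is standard WebAssembly):

```typescript
// Instantiate a wasm module only from bytes that already passed
// verification. verifyFetch throws IntegrityError before arrayBuffer()
// resolves, so tampered code never reaches WebAssembly.instantiate.
export async function instantiateVerified(verifiedBytes: ArrayBuffer): Promise<WebAssembly.Instance> {
  const { instance } = await WebAssembly.instantiate(verifiedBytes);
  return instance;
}

// Usage (sketch):
// const response = await verifyFetch('/engine.wasm', { sri: 'sha256-…' });
// const instance = await instantiateVerified(await response.arrayBuffer());
```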

Get Started in 30 Seconds

Four steps to protect your users from supply chain attacks.

1. Install

terminal
npm install @verifyfetch/transformers @huggingface/transformers

2. Generate a model manifest

terminal
npx @verifyfetch/cli hash-model Xenova/distilbert-base-uncased-finetuned-sst-2-english

3. Use it in your app

app.ts
import { verifiedPipeline } from '@verifyfetch/transformers';

const classifier = await verifiedPipeline(
  'sentiment-analysis',
  'Xenova/distilbert-base-uncased-finetuned-sst-2-english',
  { manifestUrl: '/models.vf.manifest.json' }
);

4. Enforce in CI

terminal
npx @verifyfetch/cli enforce --manifest ./models.vf.manifest.json

What is VerifyFetch?

VerifyFetch is an open-source JavaScript library for verified, resumable downloads of AI models and large files in the browser. It solves the problem of downloading multi-gigabyte files like AI models over unreliable networks: if the download fails at 3.8GB of a 4GB file, verifyfetch resumes from 3.8GB instead of starting over. Every chunk is verified against SHA-256 hashes during download, so corrupted or tampered files are caught immediately.

Unlike the native fetch() API with integrity, which buffers the entire file in memory before verifying, verifyfetch uses streaming verification with a constant 2MB memory footprint regardless of file size. This makes it practical for large ONNX models, WASM modules, and safetensors files that would otherwise crash the browser tab.

Transformers.js Integration

The @verifyfetch/transformers package provides drop-in verified model loading for HuggingFace Transformers.js. Use verifiedPipeline() as a direct replacement for pipeline() to get automatic integrity verification and resumable downloads for all model files. Alternatively, use enableVerification() to globally intercept all Transformers.js downloads through env.fetch with zero code changes to existing pipeline() calls. Supports Transformers.js v3 and v4.

npm install @verifyfetch/transformers @huggingface/transformers

WebLLM Integration

The @verifyfetch/webllm package provides verified model loading for MLC AI WebLLM. Use VerifiedMLCEngine as a drop-in replacement for MLCEngine to verify every model shard during download. Supports resumable downloads that persist across page reloads, so users do not lose progress on multi-gigabyte LLM downloads.

npm install @verifyfetch/webllm @mlc-ai/web-llm

How to Generate Model Hashes

Use the @verifyfetch/cli tool to generate integrity manifests for any HuggingFace model. The hash-model command downloads all files from a model repository and computes their SHA-256 hashes. Use the --chunked flag for large files to enable resumable verification.

npx @verifyfetch/cli hash-model Xenova/distilbert-base-uncased-finetuned-sst-2-english

Key Features

  • Resumable downloads that survive network failures and page reloads
  • Streaming verification with constant 2MB memory (not file-size dependent)
  • Chunked hashing with fail-fast corruption detection
  • Drop-in integrations for Transformers.js and WebLLM
  • Service Worker mode for automatic verification of all fetches
  • Multi-CDN failover with automatic retry across sources
  • CLI tools for hash generation and CI/CD enforcement
  • Pre-computed manifests for popular AI models

Frequently Asked Questions

How do I verify AI model downloads in the browser?

Use verifyfetch to verify AI model integrity during download. It provides drop-in integrations for Transformers.js and WebLLM that verify each file against SHA-256 hashes. Install with npm install @verifyfetch/transformers, then use verifiedPipeline() as a replacement for pipeline().

How do I add integrity verification to Transformers.js?

Install @verifyfetch/transformers and use verifiedPipeline() instead of pipeline(). It downloads and verifies all model files before loading them. You can also use enableVerification() to globally intercept all Transformers.js downloads. Generate a manifest with npx @verifyfetch/cli hash-model followed by the model ID.

How do I resume a failed large file download in JavaScript?

Use verifyFetchResumable() from verifyfetch. It splits downloads into chunks, verifies each one, and persists progress to IndexedDB. If the network drops or the page reloads, the next call resumes from the last verified chunk.

How do I verify WebLLM model integrity?

Install @verifyfetch/webllm and use VerifiedMLCEngine as a drop-in replacement for MLCEngine. It verifies every model file against SHA-256 hashes during download with resumable transfers.

What is the best way to download large files in the browser with integrity verification?

VerifyFetch is a JavaScript library designed for this. Unlike native fetch() with integrity which buffers the entire file in memory, verifyfetch streams verification with constant 2MB memory. It supports chunked verification, resumable downloads, multi-CDN failover, and Service Worker mode.

How do I prevent supply chain attacks on browser AI models?

Use verifyfetch to verify the integrity of every model file before loading it. Generate SHA-256 hashes with the CLI, then verify downloads at runtime. If any file is tampered with, verifyfetch blocks it before your application processes it.

Packages

  • verifyfetch - Core library for verified, resumable downloads
  • @verifyfetch/transformers - Transformers.js integration with verifiedPipeline() and enableVerification()
  • @verifyfetch/webllm - WebLLM integration with VerifiedMLCEngine
  • @verifyfetch/cli - CLI for generating hashes and manifests
  • @verifyfetch/manifests - Pre-computed integrity manifests for popular AI models