What is VerifyFetch?
VerifyFetch is an open-source JavaScript library for verified, resumable downloads of AI models and large files in the browser. It solves the problem of downloading multi-gigabyte files like AI models over unreliable networks: if the download fails at 3.8GB of a 4GB file, verifyfetch resumes from 3.8GB instead of starting over. Every chunk is verified against SHA-256 hashes during download, so corrupted or tampered files are caught immediately.
Unlike the native fetch() API with the integrity option, which buffers the entire file in memory before verifying it, verifyfetch uses streaming verification with a constant 2MB memory footprint regardless of file size. This makes it practical for large ONNX models, WASM modules, and safetensors files that would otherwise crash the browser tab.
Transformers.js Integration
The @verifyfetch/transformers package provides drop-in verified model loading for HuggingFace Transformers.js. Use verifiedPipeline() as a direct replacement for pipeline() to get automatic integrity verification and resumable downloads for all model files. Alternatively, use enableVerification() to globally intercept all Transformers.js downloads through env.fetch with zero code changes to existing pipeline() calls. Supports Transformers.js v3 and v4.
npm install @verifyfetch/transformers @huggingface/transformers
WebLLM Integration
The @verifyfetch/webllm package provides verified model loading for MLC AI WebLLM. Use VerifiedMLCEngine as a drop-in replacement for MLCEngine to verify every model shard during download. Supports resumable downloads that persist across page reloads, so users do not lose progress on multi-gigabyte LLM downloads.
npm install @verifyfetch/webllm @mlc-ai/web-llm
How to Generate Model Hashes
Use the @verifyfetch/cli tool to generate integrity manifests for any HuggingFace model. The hash-model command downloads all files from a model repository and computes their SHA-256 hashes. Use the --chunked flag for large files to enable resumable verification.
npx @verifyfetch/cli hash-model Xenova/distilbert-base-uncased-finetuned-sst-2-english
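The command produces a manifest mapping each model file to its hash. The schema below is a hypothetical illustration of what such a manifest might contain (per-file hashes, plus per-chunk hashes for files hashed with --chunked); the actual format may differ.

```json
{
  "model": "Xenova/distilbert-base-uncased-finetuned-sst-2-english",
  "files": {
    "onnx/model.onnx": {
      "sha256": "…",
      "chunks": ["…", "…", "…"]
    },
    "tokenizer.json": { "sha256": "…" }
  }
}
```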
Key Features
- Resumable downloads that survive network failures and page reloads
- Streaming verification with constant 2MB memory (not file-size dependent)
- Chunked hashing with fail-fast corruption detection
- Drop-in integrations for Transformers.js and WebLLM
- Service Worker mode for automatic verification of all fetches
- Multi-CDN failover with automatic retry across sources
- CLI tools for hash generation and CI/CD enforcement
- Pre-computed manifests for popular AI models
Frequently Asked Questions
How do I verify AI model downloads in the browser?
Use verifyfetch to verify AI model integrity during download. It provides drop-in integrations for Transformers.js and WebLLM that verify each file against SHA-256 hashes. Install with npm install @verifyfetch/transformers, then use verifiedPipeline() as a replacement for pipeline().
How do I add integrity verification to Transformers.js?
Install @verifyfetch/transformers and use verifiedPipeline() instead of pipeline(). It downloads and verifies all model files before loading them. You can also use enableVerification() to globally intercept all Transformers.js downloads. Generate a manifest with npx @verifyfetch/cli hash-model followed by the model ID.
How do I resume a failed large file download in JavaScript?
Use verifyFetchResumable() from verifyfetch. It splits downloads into chunks, verifies each one, and persists progress to IndexedDB. If the network drops or the page reloads, the next call resumes from the last verified chunk.
How do I verify WebLLM model integrity?
Install @verifyfetch/webllm and use VerifiedMLCEngine as a drop-in replacement for MLCEngine. It verifies every model file against SHA-256 hashes during download with resumable transfers.
What is the best way to download large files in the browser with integrity verification?
VerifyFetch is a JavaScript library designed for exactly this. Unlike native fetch() with integrity, which buffers the entire file in memory before verifying it, verifyfetch streams verification with a constant 2MB memory footprint. It supports chunked verification, resumable downloads, multi-CDN failover, and a Service Worker mode.
How do I prevent supply chain attacks on browser AI models?
Use verifyfetch to verify the integrity of every model file before loading it. Generate SHA-256 hashes with the CLI, then verify downloads at runtime. If any file is tampered with, verifyfetch blocks it before your application processes it.
Packages
- verifyfetch - Core library for verified, resumable downloads
- @verifyfetch/transformers - Transformers.js integration with verifiedPipeline() and enableVerification()
- @verifyfetch/webllm - WebLLM integration with VerifiedMLCEngine
- @verifyfetch/cli - CLI for generating hashes and manifests
- @verifyfetch/manifests - Pre-computed integrity manifests for popular AI models