
Introduction to the Compressor API

The Compressor API provides a unified, high-performance platform for shrinking, optimizing, and transforming virtually any digital file. From large videos and high-resolution images to PDFs, audio, documents, archives, and full website bundles, Compressor delivers reliable, automated compression through simple HTTP requests — built for developers, SaaS platforms, and enterprise-scale pipelines.

Designed with speed, security, and predictable output in mind, the Compressor API enables engineering teams to dramatically reduce file sizes without sacrificing quality or compatibility. Whether you're optimizing uploads for a web application, preparing assets for long-term storage, or powering a batch-compression workflow, our API offers a streamlined, scalable foundation.

With fast cloud processing, GDPR-compliant infrastructure, and predictable performance at scale, Compressor is engineered to help you build faster applications while cutting storage and bandwidth costs — an increasingly critical advantage for SEO, performance budgets, and user experience.

About This Documentation

This guide introduces the core concepts behind the Compressor API, explains request flow and processing patterns, and provides detailed reference material for each major compression and transformation feature.

You'll find:

  • A technical overview of the compression engine
  • Step-by-step examples using Upload, Fetch, and External Storage modes
  • Guidance on handling responses, metadata, and error states
  • Best practices for optimizing throughput and avoiding rate-limit penalties

Use this page as your starting point; everything else builds on these fundamentals.

API Features

The platform supports a wide and continuously expanding set of operations, including:

  • Intelligent file compression for dozens of formats
  • Automatic format detection & conversion
  • Deep optimization pipelines for media, documents, and archives
  • Image and video resizing & cropping
  • Metadata extraction & reporting
  • Batch & asynchronous processing for large workflows
  • Direct-to-cloud exporting to your preferred storage provider

Every operation is designed to be deterministic, safe, and effortless to integrate — whether you're calling the API from a backend service, edge worker, mobile app, or no-code automation.

Supplying Files for Compression

You can provide input files to the Compressor API in two ways, depending on where your data originates. The first option is File Upload, where you send the raw binary data directly to the API. This approach works best when handling local files or user-generated uploads within your application.

Quick Upload Example

The example below uploads an image file, applies high-quality compression, and returns an optimized result using the API's default behavior.

curl https://api.compressor.app/1.0/upload \
    -X POST \
    -u your-api-key: \
    -F "file=@/path/to/image.jpg"

If your files already live online, the recommended approach is File Fetch. Instead of transferring binary data, you supply the API with a publicly accessible URL, and the system retrieves the file on your behalf. This method offers lower latency and reduced bandwidth usage, especially in large-scale or automated pipelines.

Quick Fetch Example

Below is a minimal example that fetches a document from a public URL, applies intelligent compression, and returns an optimized result.

curl https://api.compressor.app/1.0/fetch \
  -X POST \
  -u your-api-key: \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://www.example.com/document.pdf"
  }'

JSON Responses

Endpoints that return a JSON response, such as /1.0/upload and /1.0/fetch, include:

  • File size before & after compression
  • Format & MIME information
  • Hashes for integrity checks
  • A temporary or permanent URL for retrieving the result

This makes it easy to integrate Compressor into automation pipelines, dashboards, or analytics systems.
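As an illustration, a response of this shape can be consumed in a few lines. The exact field names below (input, output, size, url) are assumptions for the sketch, not the documented schema:

```python
import json

# Hypothetical /1.0/upload response body; field names are illustrative.
raw = """{
  "success": true,
  "input":  {"size": 2048000, "type": "image/jpeg"},
  "output": {"size": 512000,  "type": "image/jpeg",
             "url": "https://api.compressor.app/tmp/file.jpg"}
}"""

result = json.loads(raw)
saved = result["input"]["size"] - result["output"]["size"]
ratio = result["output"]["size"] / result["input"]["size"]
print(f"Saved {saved} bytes ({ratio:.0%} of original size remains)")
```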

Handling Compression Jobs

Most compression requests — such as medium-sized images, PDFs, or documents — complete quickly and can be returned directly in the HTTP response. For these cases, the API holds the connection open and replies with the standard JSON result payload once processing is complete.

However, heavier workloads like 4K video compression, large archives, or multi-file batches may exceed typical HTTP timeout limits. To ensure reliability, the Compressor API provides two mechanisms specifically designed for long-running jobs: Webhooks and Long Polling.

Additionally, the API will automatically switch to asynchronous mode for any input file larger than 32 MB, regardless of the file format or operation. In these cases, the request returns immediately with a job ID, and the final result must be retrieved asynchronously.
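Because the same request can come back either as a finished result or as a job ID, client code needs to branch on the response shape. A minimal sketch, assuming illustrative job_id and output keys that may differ from the actual schema:

```python
def handle_response(body: dict) -> tuple:
    """Return ('done', url) for a synchronous result, or ('pending', job_id)
    when the API switched to asynchronous mode (e.g. input larger than 32 MB).
    The 'job_id' and 'output' keys are assumptions for illustration."""
    if body.get("job_id") and "output" not in body:
        return ("pending", body["job_id"])
    return ("done", body["output"]["url"])

# Synchronous result vs. asynchronous job:
sync_result = handle_response({"success": True,
                               "output": {"url": "https://example.com/f.pdf"}})
async_result = handle_response({"success": True, "job_id": "abc-123"})
```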

Webhooks

You can provide a callback URL, and the API will notify your server the moment a job finishes. The webhook payload includes the final status, metadata, and download URL, allowing you to react instantly without keeping a connection open. This is the recommended method for production environments.
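On the receiving side, a webhook handler only needs to inspect the payload and pick out the download URL. The status and download_url keys below are assumed names for the sketch, not the documented payload:

```python
from typing import Optional

def on_webhook(payload: dict) -> Optional[str]:
    """Minimal handler for a job-completion callback. Returns the download
    URL on success, or None if the job failed or is still running.
    Payload keys ('status', 'download_url') are illustrative assumptions."""
    if payload.get("status") != "completed":
        return None
    return payload["download_url"]

url = on_webhook({"status": "completed",
                  "download_url": "https://api.compressor.app/tmp/file.pdf"})
```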

Long Polling

If you prefer to avoid webhook infrastructure, you may periodically query the job status using a unique job ID. The API will return progress information and the final result once available. This offers a simple, pull-based alternative that avoids timeout issues.
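The pull-based flow can be sketched as a simple loop with an interval and an overall timeout. The status payload keys here are assumptions; the stub stands in for a real status request:

```python
import time

def poll_job(get_status, interval=2.0, timeout=600.0):
    """Call get_status() until the job reports a terminal state or the
    timeout elapses. The 'status' key and its values are illustrative."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError("job did not finish in time")

# Usage with a stub that completes on the third call:
calls = iter([{"status": "processing"}, {"status": "processing"},
              {"status": "completed", "result": "https://example.com/f.pdf"}])
final = poll_job(lambda: next(calls), interval=0.0)
```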

Downloading Processed Files

The API returns a temporary download URL for the compressed file. Temporary files are automatically deleted after 2 hours.

https://api.compressor.app/1762266300/13b2a651-a459-4357-8746-8558a429e1a3/file.pdf

If you configure an external storage provider — such as S3, Azure Blob Storage, GCS, Cloudflare R2, DigitalOcean Spaces, or Backblaze B2 — the API will write results directly to your bucket:

https://my-bucket.s3.eu-central-1.amazonaws.com/file.pdf

This makes the Compressor API production-ready out of the box.

External Storage Integration

You can route your compressed outputs to any major object storage provider. Instead of passing sensitive credentials with every request, Compressor API uses a secure system called External Storage Connectors.

Before using external storage, you create a Storage Connector by adding the credentials for your preferred provider. These credentials are encrypted at rest using strong, industry-standard encryption, ensuring that your access keys are never exposed, transmitted with requests, or stored in your application code.

Each connector receives a unique connector ID, which you reference in your API requests. This allows the API to route processed files directly to the destination bucket or container without requiring you to embed secrets in your backend, environment variables, CI/CD pipelines, or client applications.
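In practice, referencing a connector amounts to including its ID in the request body instead of any credentials. A hedged sketch of such a payload, where the "export" object and "connector" parameter name are assumptions based on this guide, not a documented schema:

```python
import json

# Hypothetical request body routing output to an external bucket.
# "export", "connector", and "path" are illustrative parameter names.
payload = {
    "url": "https://www.example.com/document.pdf",
    "export": {
        "connector": "your-connector-id",   # connector ID, not credentials
        "path": "compressed/document.pdf"
    }
}
body = json.dumps(payload)
```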

Supported providers include:

  • AWS S3
  • Google Cloud Storage
  • Microsoft Azure Storage
  • Cloudflare R2
  • DigitalOcean Spaces
  • Backblaze B2

Rate Limits

To ensure consistent performance for all customers, the API enforces rolling rate limits per API key.

  • Window: 15 minutes
  • Limit: 100,000 requests
  • ~111 requests per second sustained

Each response includes the following headers:

X-RateLimit-Limit: 100000   # Number of allowed requests in the current period
X-RateLimit-Remaining: 2120   # Number of remaining requests in the current period
X-RateLimit-Reset: 778   # Seconds remaining in the current period
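These headers are enough to throttle a client without guesswork. A minimal sketch of one reasonable strategy (the header names match the documentation; the fallback logic is an assumption):

```python
def seconds_to_wait(headers: dict) -> int:
    """Decide how long to pause before the next request, based on the
    rate-limit headers above. Returns 0 while requests remain."""
    remaining = int(headers.get("X-RateLimit-Remaining", "1"))
    if remaining > 0:
        return 0  # still within the allocation
    # Prefer Retry-After; fall back to the reset window, then a default.
    return int(headers.get("Retry-After",
                           headers.get("X-RateLimit-Reset", "60")))

print(seconds_to_wait({"X-RateLimit-Remaining": "0",
                       "X-RateLimit-Reset": "778",
                       "Retry-After": "778"}))   # 778
```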

If you exceed your allocation, the API will return:

HTTP/1.1 429 Too Many Requests
Retry-After: 778
X-RateLimit-Limit: 100000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 778

{
    "success": false,
    "status": 429,
    "id": "594941b7-db16-45e1-a5b5-6e8cbf3b604f",
    "message": "Rate limit exceeded. Retry in 778 seconds."
}

If you have a high-volume batch need, our team can temporarily increase your limits.

Error Handling

The Compressor API uses industry-standard HTTP status codes with typical categories:

  • 2xx — Successful request
  • 4xx — Invalid or incomplete input
  • 5xx — Server-side processing error

Error responses include:

  • A human-readable message
  • An internal request ID
  • A success: false flag

Example:

{
  "success": false,
  "status": 422,
  "id": "0c3a7bf4-9986-4f4a-bb72-155e9111c5de",
  "message": "Incoming request body does not contain a valid JSON data"
}

Including the request ID when contacting support allows us to resolve issues quickly.

Validation Errors

When a request contains invalid or incomplete parameters, the API returns a 422 Unprocessable Entity response along with a detailed validation report. In addition to the standard error fields, the response includes an errors array that lists every parameter that failed validation. Each entry specifies the problematic field and a clear, human-readable message explaining the issue.

{
  "success": false,
  "status": 422,
  "id": "ead0c78c-7412-4871-a912-e4b0b9e78b5e",
  "message": "Validation failed for one or more parameters",
  "errors": [
    {
      "field": "video.resize",
      "message": "When mode='auto', at least one of 'width' or 'height' must be specified."
    }
  ]
}