Introduction

Eventual is a platform for processing multimodal data that lets you:
  • Run Jobs (data processing pipelines) with simple Python functions
  • Process images, video, audio, and text at scale using our multimodal engine, daft
  • Scale automatically from single files to millions of items
  • Use powerful ML models and computer vision operations out of the box
  • Deploy to your cloud with minimal infrastructure setup
  • Monitor and manage your data pipelines from a web dashboard
You get the serverless experience of a cloud data warehouse, extended to every other modality of data. The best part? Everything is just Python code - no YAML, no containers to build, no JVM, no infrastructure to manage.
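
For a taste of what that looks like, here's a minimal daft sketch of an image pipeline (the file paths are placeholders):

import daft

# A tiny DataFrame of image locations (placeholder paths).
df = daft.from_pydict({"path": ["images/cat.jpg", "images/dog.jpg"]})

# Declaratively download, decode, and resize each image.
df = df.with_column(
    "thumbnail",
    daft.col("path").url.download().image.decode().image.resize(64, 64),
)
df.show()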

Getting started

  1. Reach out to the Eventual team to get early access to the platform.
  2. Install the ev SDK: pip install ev-sdk
  3. Authenticate: ev auth login, which opens your browser to log in
…and you can start processing data right away. Check out our simple getting-started examples to see what's possible.

Quick verification

Test that everything is working:
ev --version
ev auth status

How does it work?

When you write code with Eventual, here’s what happens behind the scenes:

1. Write your data processing logic

Express your data processing logic using daft (our multimodal query engine) along with any external resources you need like LLM APIs, vector databases, or cloud storage. Everything is declarative - you just say what you want, not how to do it.
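
For example, here's a sketch of a daft query that enriches text with an LLM (the model name, S3 path, and column names are illustrative, and an OPENAI_API_KEY is assumed to be set in your environment):

import daft
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

@daft.udf(return_dtype=daft.DataType.string())
def summarize(texts):
    # One LLM call per document (model name is illustrative).
    return [
        client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": f"Summarize: {t}"}],
        ).choices[0].message.content
        for t in texts.to_pylist()
    ]

df = daft.read_parquet("s3://my-bucket/documents.parquet")  # placeholder path
df = df.with_column("summary", summarize(daft.col("text")))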

2. Automatic bundling

Eventual automatically bundles:
  • Your daft queries and transformations
  • Local files and artifacts your code needs
  • Python dependencies and environment configurations
  • Connections to external services (OpenAI, Pinecone, S3, etc.)
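
Concretely, each of those bullets maps to something you'd write anyway; a sketch, with placeholder file and bucket names:

import daft                                        # daft queries and transformations
from openai import OpenAI                          # a Python dependency
prompt = open("prompts/classify.txt").read()       # a local file your code needs
df = daft.read_csv("s3://my-bucket/products.csv")  # a connection to S3

All of this ships with your job automatically; there is no manifest to maintain.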

3. Smart cloud orchestration

Your packaged job is sent to our cloud, where we:
  • Analyze your workload holistically to understand resource requirements
  • Map your job to exactly the right amount of hardware (CPUs, GPUs, memory)
  • Automatically provision and configure all the resources you reference
  • Handle authentication, retries, and fault tolerance

4. Zero manual configuration

You don’t need to:
  • Specify instance types or cluster sizes
  • Configure API endpoints or credentials
  • Manage container orchestration
  • Deal with distributed computing complexity
The result? The same Python code that processes 10 images on your laptop scales automatically to 10 million images in the cloud, with all the complexity handled for you.
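
For instance, assuming your images sit behind a glob pattern (the bucket name is a placeholder), only the path changes:

import daft

# Laptop: a handful of local files.
df = daft.from_glob_path("images/*.jpg")

# Cloud: the same pipeline over millions of objects.
# df = daft.from_glob_path("s3://my-bucket/images/**/*.jpg")

df = df.with_column("image", daft.col("path").url.download().image.decode())
df.collect()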

Need help?