
The Ambient Auction Program

Introduction

The auction program is the native SBF program responsible for creating, bundling, auctioning, and verifying the inference requests that a user makes. This overview first presents the API (Program States), which defines the layout of the auction account states, instruction data, and account ordering. It then walks through an interface (Auction Interface — Developer Usage Guide) that facilitates interactions with the program, illustrated with an example user flow.

Note: for an additional layer of operations on the input or output of inference requests, an oracle program (Tool Oracle) is used.

Program States

Getting Started

From your project folder:

cargo add ambient-auction-api --git https://github.com/ambient-xyz/auction-api

This adds ambient-auction-api to your Cargo.toml file. The crate contains the account and data structures expected by each instruction of the Ambient auction program.

State

JobRequest

Account created and owned by the auction program to keep state related to a user inference request.

pub struct JobRequest {
    /// The public key of the bundle this request is participating in
    pub bundle: Pubkey,
    /// The maximum price per output token
    pub max_price_per_output_token: u64,
    /// The maximum output token this request accepts
    pub max_output_tokens: u64,
    /// Context length tier type
    pub context_length_tier: RequestTier,
    /// Expiry duration tier type
    pub expiry_duration_tier: RequestTier,
    /// The public key of the job requester.
    pub authority: Pubkey,
    /// An [IPFS content identifier](<https://docs.ipfs.tech/concepts/content-addressing/>) of the metadata necessary to complete the job.
    pub input_hash: Pubkey,
    pub input_hash_iv: [u8; 16],
    /// seeds used to derive the request PDA
    pub seed: [u8; 32],
    /// bump for the request account
    pub bump: u64,
    /// Output tokens generated
    pub output_token_count: u64,
    /// Request input tokens
    pub input_token_count: u64,
    /// Current lifecycle stage of the job request.
    ///
    /// Indicates whether the request is still awaiting inference,
    /// pending verification, or already completed.
    pub status: JobRequestStatus,
    /// Verification-related state for this job request.
    ///
    /// Tracks verifier assignments, token ranges, hashes, and progress
    /// through the verification process.
    pub verification: VerificationState,
    /// account used to store the input data for the inference job request
    pub input_data_account: Option<NonZeroPubkey>,
    /// account used to store the output data for the inference job request
    pub output_data_account: Option<NonZeroPubkey>,
}

/// Represents the lifecycle status of a job request.
pub enum JobRequestStatus {
    /// The request has been created and is waiting for inference output.
    WaitingForOutput = 0,
    /// The inference output has been generated and is awaiting verification.
    OutputReceived = 1,
    /// The output has been verified and the request is completed.
    OutputVerified = 2,
}
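An off-chain client that polls a JobRequest account can branch on this status byte. A minimal, self-contained sketch (the mirrored enum and helper are illustrative, not part of ambient-auction-api):

```rust
/// Illustrative mirror of the on-chain status discriminants shown above;
/// the real type lives in the ambient-auction-api crate.
#[derive(Debug, PartialEq)]
enum JobRequestStatus {
    WaitingForOutput = 0,
    OutputReceived = 1,
    OutputVerified = 2,
}

/// Map the raw status byte read from a JobRequest account back to the enum.
/// An unrecognized byte yields None.
fn status_from_byte(b: u8) -> Option<JobRequestStatus> {
    match b {
        0 => Some(JobRequestStatus::WaitingForOutput),
        1 => Some(JobRequestStatus::OutputReceived),
        2 => Some(JobRequestStatus::OutputVerified),
        _ => None,
    }
}
```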

/// Holds all data required to manage and track verification of a job request.
/// 
/// Includes:
/// - Merkle root of the job’s output data.
/// - Assigned verifiers and their corresponding token ranges.
/// - Individual verifier states and verified token counts.
/// - Output hash for integrity checks (optionally encrypted).
/// - Initialization vectors (IVs) for optional encryption of the output hash
///   and Merkle root, using a shared secret between client and ambient.
pub struct VerificationState {
    pub merkle_root: [u8; 32],
    pub assigned_verifiers: [Pubkey; VERIFIERS_PER_AUCTION],
    pub assigned_verifiers_token_ranges: [u64; VERIFIERS_PER_AUCTION * 2],
    pub verifier_states: [JobVerificationState; VERIFIERS_PER_AUCTION],
    pub verified_tokens: [u64; VERIFIERS_PER_AUCTION],
    pub output_hash: [u8; 32],
    /// output hash and merkle root may be encrypted with a shared secret + iv,
    /// where shared_secret = ambient private key x client public key
    /// and IV is a random byte array (a nonce in crypto terms)
    ///
    /// encryption is used iff `encryption_iv` != [0; 16]
    pub output_hash_iv: [u8; 16],
    pub merkle_root_iv: [u8; 16],
}
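The all-zero-IV convention described in the struct comment is easy to check client-side; a hedged one-liner (the helper name is ours, not the crate's):

```rust
/// Per the convention above, the output hash / merkle root are encrypted
/// iff the corresponding IV is not the all-zero array.
fn iv_indicates_encryption(iv: &[u8; 16]) -> bool {
    *iv != [0u8; 16]
}
```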

NOTE: more on RequestTier in the RequestBundle section.

RequestBundle

Economically similar requests are bundled together. Similarity is based on a request's context length and expiry duration RequestTiers. Each (context length, expiry duration) pair has an associated “bundle chain” (explained later).

Once a bundle is concluded (marked as BundleStatus::Cancelled or BundleStatus::Filled), a new child bundle with an identical (context length, expiry duration) pair is created from it.
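Conceptually, each lane forms a linked list of bundles: following child links from concluded bundles leads to the lane's current active bundle. A simplified sketch with stand-in types (vector indices in place of public keys):

```rust
/// Stand-in for a bundle on a lane's chain; `child` plays the role of
/// `child_bundle_key`, and `concluded` of the Cancelled/Filled statuses.
struct BundleNode {
    concluded: bool,
    child: Option<usize>,
}

/// Walk child links from `idx` until the first non-concluded bundle.
fn active_bundle(chain: &[BundleNode], mut idx: usize) -> usize {
    while chain[idx].concluded {
        match chain[idx].child {
            Some(next) => idx = next,
            None => break,
        }
    }
    idx
}
```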

pub struct RequestBundle {
    /// Current status of the bundle
    pub status: BundleStatus,
    /// Context length tier type
    pub context_length_tier: RequestTier,
    /// Expiry duration tier type
    pub expiry_duration_tier: RequestTier,
    /// The auction for this bundle.
    pub auction: Option<NonZeroPubkey>,
    /// Assigned verifiers for this bundle.
    pub verifiers: Verifiers,
    /// The slot after which the auction cannot receive any more bids 
    /// and is considered ended.
    pub expiry_slot: u64,
    /// The maximum input tokens each request can have
    pub max_context_length: u64,
    /// Total number of requests contained in this bundle.
    pub requests_len: u64,
    /// The number of job requests that were successfully verified
    pub num_verified_requests: u64,
    /// limit how much time winning bidder can take to submit all jobs
    pub job_submission_duration: u64,
    /// Total amount committed by the requesters
    pub request_committed_amount: u64,
    /// Total input tokens in the requests
    pub total_input_tokens: u64,
    /// Maximum output tokens to be generated for the requests
    pub maximum_output_tokens: u64,
    /// Total output tokens generated for the requests
    pub output_tokens_generated: u64,
    /// the parent bundle key this bundle is derived from
    pub parent_bundle_key: Pubkey,
    /// The child bundle key to be derived from this bundle
    pub child_bundle_key: Option<NonZeroPubkey>,
    /// bump for this bundle account
    pub bump: u64,
    /// payer key for the bundle account creation
    pub payer: Pubkey,
    /// The clearing price from the concluded auction for this bundle.
    /// Denotes the payment rate (in lamports) per output token that the
    /// winning bidder will receive for fulfilling the bundle’s requests.
    pub price_per_output_token: Option<NonZeroU64>,
}

Auction

A bundle is put up for auction as soon as it is created. The auction follows a second-lowest-bid reverse auction rule: instead of the lowest bid winning, the second-lowest bidder is selected.

The winning bidder must execute every inference request in the bundle and submit the results. These outputs are then verified by a set of randomly assigned verifiers, selected proportionally to their network contribution (i.e., stake amount).
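For example, with made-up bids of 9, 5, and 7 lamports per output token, the bidder who bid 7 is selected. A sketch of the selection rule (ours, not the program's code):

```rust
/// Second-lowest selection: sort revealed bids ascending and take index 1.
/// Returns None when fewer than two bids were revealed.
fn second_lowest_bid(mut bids: Vec<u64>) -> Option<u64> {
    bids.sort_unstable();
    bids.get(1).copied()
}
```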

pub struct Auction {
    /// Context length tier type
    pub context_length_tier: RequestTier,
    /// Expiry duration tier type
    pub expiry_duration_tier: RequestTier,
    /// Bundle of requests for this auction.
    pub request_bundle: Pubkey,
    /// The slot after which the auction cannot receive any more bids
    /// and is considered ended.
    pub expiry_slot: u64,
    /// The maximum input tokens each request can have
    pub max_context_length: u64,
    /// The lowest bid price submitted
    pub lowest_bid_price: Option<NonZeroU64>,
    /// The second-lowest bid price submitted
    pub winning_bid_price: Option<NonZeroU64>,
    /// The public key of the winning bid account
    pub winning_bid: Pubkey,
    /// The public key of the lowest priced bid account
    pub lowest_bid: Pubkey,
    /// Current status of the auction
    pub status: AuctionStatus,
    /// Total number of bids revealed
    pub bids_revealed: u64,
    /// Total number of concealed bids placed
    pub bids_placed: u64,
    /// Amount to be kept in each bid account as commitment,
    pub bid_commitment_amount: u64,
    /// bump for this auction account
    pub auction_bump: u64,
    /// The fee payer for creating this account
    pub payer: Pubkey,
}

BundleRegistry

For each RequestTier combination, or “bundle lane,” there is a corresponding registry that tracks the lane's current active bundle (the one accepting requests).

pub struct BundleRegistry {
    /// Context length tier type
    pub context_length_tier: RequestTier,
    /// Expiry duration tier type
    pub expiry_duration_tier: RequestTier,
    /// The latest bundle for this tier.
    pub latest_bundle: Pubkey,
    pub payer: Pubkey,
    /// bump used to derive this account
    pub bump: u64,
}
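Conceptually, the registry is a per-lane pointer that advances to the child bundle whenever the current one concludes. An illustrative model with stand-in types (not the on-chain layout):

```rust
use std::collections::HashMap;

type Tier = u8;            // stand-in for RequestTier
type BundleKey = [u8; 32]; // stand-in for Pubkey

/// One entry per (context length tier, expiry duration tier) lane,
/// always pointing at that lane's latest active bundle.
struct Registry(HashMap<(Tier, Tier), BundleKey>);

impl Registry {
    fn latest_bundle(&self, lane: (Tier, Tier)) -> Option<&BundleKey> {
        self.0.get(&lane)
    }

    /// Called when a lane's bundle concludes and its child is created.
    fn advance(&mut self, lane: (Tier, Tier), child: BundleKey) {
        self.0.insert(lane, child);
    }
}
```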

Auction Interface — Developer Usage Guide

The auction interface crate provides Pinocchio-based typed helpers to interact with the Auction Program through Cross-Program Invocations (CPIs).

Its job is to guarantee that your on-chain program always calls the Auction Program with the expected account ordering and instruction data.

This document explains how to use the interface, how to prepare the input data account, and how to perform the CPI to request an Ambient verified inference job.


Getting Started

From your project folder:

cargo add ambient-auction-interface --git https://github.com/ambient-xyz/auction-interface

This will add the ambient-auction-interface to your Cargo.toml file.

High-Level Responsibilities

What your off-chain client must do:

  1. Create a data account (owned by the Auction Program)
  2. Append input bytes to that account (this can also be done inside a program)

What your on-chain program must do:

  1. Provide the correct typed accounts
  2. Call RequestJob via CPI using the interface
  3. Optionally read the resulting JobRequest account

End-to-End Job Request Flow

There are three stages in a verified inference request.


Stage 1 — Create & Initialize the Input Data Account

Before your program can call RequestJob, you must prepare a data account that contains the job’s input bytes.

Requirements: the account must be allocated with enough space for the input bytes plus the Metadata header, funded to rent exemption, and owned by the Auction Program.

Once the account is created, you upload your input bytes using the Auction Program’s AppendData instruction (client-side or through CPIs).

Full Example Client Code

use auction_api::Metadata;
use clap::Parser;
use rand::distr::Alphanumeric;
use rand::Rng;
use solana_client::nonblocking::rpc_client::RpcClient;
use solana_sdk::commitment_config::CommitmentConfig;
use solana_sdk::pubkey::Pubkey;
use solana_sdk::{
    message::{v0::Message, VersionedMessage},
    signature::{read_keypair_file, Keypair},
    signer::Signer as _,
    transaction::VersionedTransaction,
};
use std::{fmt::Display, path::PathBuf};

#[derive(Parser)]
struct Args {
    /// The keypair that will pay for the auction
    payer_keypair: PathBuf,
    /// The Solana RPC cluster URL. Defaults to <http://localhost:8899>
    #[arg(short = 'r', long)]
    cluster_rpc: Option<String>,
    /// data to be put onchain
    data_file: PathBuf,
}

fn strerr<E: Display>(arg: E) -> String {
    format!("There was an error: {arg}")
}

async fn create_data_account(
    client: &RpcClient,
    payer: &Keypair,
    data: &[u8],
) -> Result<(), String> {
    let space = data.len() + Metadata::LEN;
    let account_lamports = client
        .get_minimum_balance_for_rent_exemption(space)
        .await
        .map_err(strerr)?;

    let seed: String = rand::rng()
        .sample_iter(&Alphanumeric)
        .take(32) // length
        .map(char::from)
        .collect();

    let data_account =
        Pubkey::create_with_seed(&payer.pubkey(), &seed, &ambient_auction_client::ID)
            .map_err(strerr)?;
    let mut instructions = vec![
        solana_system_interface::instruction::create_account_with_seed(
            &payer.pubkey(),
            &data_account,
            &payer.pubkey(),
            &seed,
            account_lamports,
            space as u64,
            &ambient_auction_client::ID,
        ),
    ];
    eprintln!("creating new data account");

    let mut offset = Metadata::LEN as u64;

    for chunk in data.chunks(1000) {
        let ix = ambient_auction_client::sdk::append_data(
            payer.pubkey(),
            chunk,
            &seed,
            offset,
            data_account,
            None,
        );
        // advance the write offset by the bytes appended in this chunk
        offset += chunk.len() as u64;

        instructions.push(ix);
        let tx = VersionedTransaction::try_new(
            VersionedMessage::V0(
                Message::try_compile(
                    &payer.pubkey(),
                    &instructions,
                    &[],
                    client.get_latest_blockhash().await.map_err(strerr)?,
                )
                .map_err(strerr)?,
            ),
            &[&payer],
        )
        .map_err(strerr)?;

        let sig = match client
            .send_and_confirm_transaction_with_spinner(&tx)
            .await
            .map_err(strerr)
        {
            Ok(sig) => sig,
            Err(e) => {
                eprintln!("Error submitting append data txn: {e}");
                continue;
            }
        };
        eprintln!("Appended data to account: {data_account} with Signature: {sig}");
        instructions.clear();
    }
    Ok(())
}

fn main() -> Result<(), String> {
    let args = Args::parse();
    let payer = read_keypair_file(args.payer_keypair).map_err(strerr)?;
    let rpc = RpcClient::new_with_commitment(
        args.cluster_rpc
            .unwrap_or_else(|| "http://localhost:8899".to_string()),
        CommitmentConfig::confirmed(),
    );
    let data = std::fs::read(args.data_file).map_err(strerr)?;
    tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .map_err(strerr)?
        .block_on(create_data_account(&rpc, &payer, &data))
}

What This Produces

After running your client, a rent-exempt data account owned by the Auction Program exists on-chain, with your input bytes stored after the Metadata header.

This must happen before your program can submit a RequestJob.


Stage 2 — Call RequestJob via CPI (Inside Your On-Chain Program)

Inside your program’s instruction (e.g., RequestInstruction), you use the interface-provided accounts to build the CPI.

Pattern

let data = &self.data;
let accounts = &self.accounts;

// Build CPI account metas in the correct order
let metas = accounts
    .ambient_auction_accounts
    .to_account_metas()
    .collect::<Vec<_>>();

// Build CPI instruction
let ix = pinocchio::instruction::Instruction {
    program_id: &auction_api::ID,
    data: &data.ambient_request.to_bytes(),
    accounts: &metas,
};

// Provide raw AccountInfo references in the correct order
let account_infos: Vec<&AccountInfo> =
    accounts.ambient_auction_accounts.inner().iter().collect();

// Execute CPI
slice_invoke(&ix, &account_infos)?;

The interface ensures: