Tracing API

Distributed tracing lets you follow a request across services. A trace contains many spans (units of work), and logs emitted during the trace are correlated to it by trace ID. Everything is buffered in memory and sent in a single HTTP request when the trace ends.

How it works

Trace → a single request or operation (e.g. an HTTP request, a background job).
Span → a unit of work inside a trace (e.g. a DB query, an LLM call).
Log → emitted via trace.log.* and automatically correlated to the trace ID.
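
The relationships above can be sketched as plain types. The field names here are illustrative assumptions, not the SDK's wire format:

```typescript
// Illustrative data model for the three concepts above.
// Field names are assumptions, not the SDK's wire format.
interface Trace {
  id: string
  name: string
}

interface Span {
  id: string
  trace_id: string          // every span belongs to one trace
  parent_span_id?: string   // set for nested (child) spans
  name: string
}

interface Log {
  trace_id: string          // logs are correlated by trace ID
  level: 'debug' | 'info' | 'warn' | 'error'
  message: string
}
```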

Quick Start
Wrap any async operation in a trace
import { LightningLogsClient } from '@lightning-logs/sdk'

const client = new LightningLogsClient({
  baseURL: process.env.SUPABASE_URL!,
  getAuthToken: async () => process.env.LIGHTNING_LOGS_API_KEY!,
  serviceName: 'api',
  environment: 'production',
})

// Wrap an operation — trace and spans are flushed automatically
await client.tracer.trace(
  { name: 'process-payment' },
  async (trace) => {
    const validateSpan = trace.startSpan('validate-card')
    await validateCard(cardDetails)
    validateSpan.end({ status: 'ok' })

    const chargeSpan = trace.startSpan('charge-stripe')
    await chargeStripe(amount)
    chargeSpan.end({ status: 'ok' })

    trace.log.info('Payment processed', { user_id: userId, attrs: { amount } })
  }
)

client.startTrace(options): TraceHandle
Create a trace and manage its lifecycle manually. Shorthand for client.tracer.startTrace().

Options

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | Yes | Human-readable name for the trace |
| serviceName | string | No | Overrides client-level serviceName |
| environment | string | No | Overrides client-level environment |
| metadata | Record<string, any> | No | Arbitrary metadata attached to the trace |

Example

const trace = client.startTrace({
  name: 'api.handle-request',
  metadata: { route: '/api/orders', method: 'POST' },
})

// ... do work ...

await trace.end({ status: 'ok' })

TraceHandle
Returned by startTrace(). Controls spans and flushes the trace.

Properties

| Property | Type | Description |
| --- | --- | --- |
| id | string | Unique trace ID |
| name | string | Trace name |
| isEnded | boolean | Whether trace.end() has been called |
| log.debug / .info / .warn / .error | (msg, ctx?) => void | Emit a log correlated to this trace ID |
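
Based on the properties above, the handle's shape can be sketched as an interface. These are illustrative types, not the SDK's own declarations:

```typescript
// Illustrative shape of TraceHandle (not the SDK's own type declarations).
type LogContext = Record<string, unknown>

interface TraceHandleSketch {
  readonly id: string        // unique trace ID
  readonly name: string      // trace name
  readonly isEnded: boolean  // whether end() has been called
  log: {
    debug(msg: string, ctx?: LogContext): void
    info(msg: string, ctx?: LogContext): void
    warn(msg: string, ctx?: LogContext): void
    error(msg: string, ctx?: LogContext): void
  }
}
```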

trace.startSpan(name, options?): SpanHandle

Creates a top-level span. The first span becomes the root span.

const span = trace.startSpan('db.query', {
  metadata: { table: 'users', operation: 'SELECT' }
})
await runQuery()
span.end({ status: 'ok' })

trace.end(options?): Promise<void>

Closes the trace, auto-closes any unclosed spans, and flushes all data in one HTTP request. Calling end() more than once is a no-op.

await trace.end({ status: 'ok' })
// or on failure:
await trace.end({ status: 'error' })
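
The lifecycle rules above (auto-closing open spans, one flush, no-op on repeat calls) can be sketched as follows. This is an illustrative model, not the SDK source:

```typescript
// Illustrative sketch of trace.end() semantics (not the SDK source):
// close any still-open spans, flush once, ignore repeat calls.
type OpenSpan = { ended: boolean; end(opts: { status: string }): void }

class SketchTrace {
  isEnded = false
  flushes = 0
  private openSpans: OpenSpan[] = []

  startSpan(_name: string): OpenSpan {
    const span: OpenSpan = {
      ended: false,
      end(_opts) { span.ended = true },
    }
    this.openSpans.push(span)
    return span
  }

  async end(opts: { status: string }): Promise<void> {
    if (this.isEnded) return                  // second call is a no-op
    for (const s of this.openSpans) {
      if (!s.ended) s.end(opts)               // auto-close unclosed spans
    }
    this.isEnded = true
    this.flushes += 1                         // all data sent in one request
  }
}
```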

trace.wrap(spanName, fn): Promise<T>

Convenience wrapper: starts a span, runs fn(span), then ends the span and the trace automatically. The trace is always flushed, even if fn throws.

const result = await trace.wrap('process-order', async (span) => {
  const subSpan = span.startSpan('validate-inventory')
  await checkInventory(items)
  subSpan.end({ status: 'ok' })
  return { orderId: '123' }
})
// trace is automatically ended here

SpanHandle
Returned by trace.startSpan() or span.startSpan(). Supports nesting.

span.startSpan(name, options?): SpanHandle

Creates a child span. parent_span_id is set automatically.

const dbSpan = trace.startSpan('db.transaction')

const insertSpan = dbSpan.startSpan('insert-order')
await insertOrder(data)
insertSpan.end({ status: 'ok' })

const insertItemsSpan = dbSpan.startSpan('insert-order-items')
await insertOrderItems(items)
insertItemsSpan.end({ status: 'ok' })

dbSpan.end({ status: 'ok' })

span.end(options?)

Records end time and status locally. Data is only sent to the network when trace.end() is called.

| Option | Type | Description |
| --- | --- | --- |
| status | string | 'ok', 'error', or any string |
| metadata | Record<string, any> | Merged with span metadata |
| tokens | number | AI token count (for AI spans) |
| cost | number | AI cost in USD (for AI spans) |
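
The metadata option is merged with the span's existing metadata. A sketch of that behavior, assuming a shallow merge where end() keys win (the exact merge strategy is an assumption, not documented above):

```typescript
// Illustrative sketch of span.end() option handling (not the SDK source).
// Assumes a shallow metadata merge where end() keys overwrite existing keys.
type Meta = Record<string, unknown>

class EndSketchSpan {
  metadata: Meta
  status?: string
  tokens?: number
  cost?: number

  constructor(initial: Meta = {}) { this.metadata = initial }

  end(opts: { status: string; metadata?: Meta; tokens?: number; cost?: number }) {
    this.status = opts.status
    this.tokens = opts.tokens
    this.cost = opts.cost
    this.metadata = { ...this.metadata, ...opts.metadata } // shallow merge
  }
}
```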

span.wrap(fn): Promise<T>

Runs fn() and automatically ends the span. Sets status='error' if fn throws.

const result = await span.wrap(async () => {
  return await db.query('SELECT * FROM users WHERE id = $1', [userId])
})
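
The error semantics can be sketched as below. This is an illustrative model, assuming a default status of 'ok' on success (the docs above only specify the 'error' case):

```typescript
// Illustrative sketch of span.wrap() semantics (not the SDK source):
// run fn, end with 'ok' on success (assumed default), 'error' on throw, rethrow.
class WrapSketchSpan {
  status?: string

  end(opts: { status: string }) { this.status = opts.status }

  async wrap<T>(fn: () => Promise<T>): Promise<T> {
    try {
      const result = await fn()
      this.end({ status: 'ok' })
      return result
    } catch (err) {
      this.end({ status: 'error' })  // mark failure, then propagate
      throw err
    }
  }
}
```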

AI Span Tracking
Mark spans as AI calls to track model usage, token counts, and cost.
const trace = client.startTrace({ name: 'generate-summary' })

const llmSpan = trace.startSpan('openai.chat', {
  isAI: true,
  model: 'gpt-4o',
  metadata: { prompt_tokens_estimated: 800 },
})

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: prompt }],
})

llmSpan.end({
  status: 'ok',
  tokens: response.usage?.total_tokens,
  cost: calculateCost(response.usage),
  metadata: { finish_reason: response.choices[0].finish_reason },
})

trace.log.info('Summary generated', {
  attrs: { model: 'gpt-4o', tokens: response.usage?.total_tokens }
})

await trace.end({ status: 'ok' })

Quick Logging (client.log.*)
For one-off logs not tied to a trace, use the shorthand log helpers directly on the client.
// Simple logs — no trace context required
client.log.info('Server started', { attrs: { port: 3000 } })
client.log.warn('Rate limit approaching', { user_id: 'user-123' })
client.log.error('Unexpected error', { route: '/api/checkout' })

// With analytics type for tracking events
client.log.info('User signed up', {
  type: 'analytics',
  event_type: 'signup',
  user_id: newUser.id,
})

These are buffered and flushed automatically. For logs correlated to a trace, use trace.log.* instead.

Next Steps