# Tracing API
Distributed tracing lets you follow a request across services. One trace contains many spans (units of work), and logs can be correlated to any span. Everything is buffered in memory and sent in a single request when the trace ends.
## How it works
- **Trace** → a single request or operation (e.g. an HTTP request, a background job).
- **Span** → a unit of work inside a trace (e.g. a DB query, an LLM call).
- **Log** → emitted via `trace.log.*` and automatically correlated to the trace ID.
```ts
import { LightningLogsClient } from '@lightning-logs/sdk'

const client = new LightningLogsClient({
  baseURL: process.env.SUPABASE_URL!,
  getAuthToken: async () => process.env.LIGHTNING_LOGS_API_KEY!,
  serviceName: 'api',
  environment: 'production',
})
```
```ts
// Wrap an operation — trace and spans are flushed automatically
await client.tracer.trace(
  { name: 'process-payment' },
  async (trace) => {
    const validateSpan = trace.startSpan('validate-card')
    await validateCard(cardDetails)
    validateSpan.end({ status: 'ok' })

    const chargeSpan = trace.startSpan('charge-stripe')
    await chargeStripe(amount)
    chargeSpan.end({ status: 'ok' })

    trace.log.info('Payment processed', { user_id: userId, attrs: { amount } })
  }
)
```

## `client.tracer.startTrace()`

### Options
| Name | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Human-readable name for the trace |
| serviceName | string | No | Overrides client-level serviceName |
| environment | string | No | Overrides client-level environment |
| metadata | Record<string, any> | No | Arbitrary metadata attached to the trace |
### Example

```ts
const trace = client.tracer.startTrace({
  name: 'api.handle-request',
  metadata: { route: '/api/orders', method: 'POST' },
})

// ... do work ...
await trace.end({ status: 'ok' })
```

### Properties
| Property | Type | Description |
|---|---|---|
| id | string | Unique trace ID |
| name | string | Trace name |
| isEnded | boolean | Whether trace.end() has been called |
| log.debug / .info / .warn / .error | (msg, ctx?) => void | Emit a log correlated to this trace ID |
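To make the properties above concrete, here is a hedged in-memory sketch (a stand-in, not the real SDK): every `log.*` call is stamped with the trace ID, and `isEnded` flips once `end()` is called.

```ts
// Stand-in trace handle illustrating the documented properties (not the real SDK).
type LogLevel = 'debug' | 'info' | 'warn' | 'error'
type LogRecord = { level: LogLevel; msg: string; trace_id: string; ctx?: Record<string, any> }

function makeTraceStub(name: string, sink: LogRecord[]) {
  const id = `trace_${Math.random().toString(36).slice(2)}`
  let ended = false
  const emit = (level: LogLevel) => (msg: string, ctx?: Record<string, any>) => {
    // every log carries this trace's ID, so the backend can correlate them
    sink.push({ level, msg, trace_id: id, ctx })
  }
  return {
    id,
    name,
    get isEnded() { return ended },
    log: { debug: emit('debug'), info: emit('info'), warn: emit('warn'), error: emit('error') },
    end() { ended = true },
  }
}

const records: LogRecord[] = []
const trace = makeTraceStub('api.handle-request', records)
trace.log.info('request received', { route: '/api/orders' })
trace.log.warn('slow upstream')
trace.end()
```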
### `trace.startSpan(name, options?): SpanHandle`

Creates a top-level span. The first span becomes the root span.
```ts
const span = trace.startSpan('db.query', {
  metadata: { table: 'users', operation: 'SELECT' },
})

await runQuery()
span.end({ status: 'ok' })
```

### `trace.end(options?): Promise<void>`
Closes the trace, auto-closes any unclosed spans, and flushes all data in one HTTP request. Calling end() more than once is a no-op.
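These semantics can be sketched with a small in-memory stand-in (assumed names, not the real SDK, and `end()` is synchronous here for brevity): spans the caller forgot to close get an end time when the trace ends, and a second `end()` does nothing.

```ts
// Stand-in trace mimicking the documented end() behavior (not the real SDK).
type SpanRecord = { name: string; status?: string; endedAt?: number }

function makeFlushingTrace(send: (spans: SpanRecord[]) => void) {
  const spans: SpanRecord[] = []
  let ended = false
  return {
    startSpan(name: string) {
      const record: SpanRecord = { name }
      spans.push(record)
      return {
        end(opts?: { status?: string }) {
          record.status = opts?.status
          record.endedAt = Date.now()
        },
      }
    },
    end(opts?: { status?: string }) {
      if (ended) return // calling end() again is a no-op
      ended = true
      for (const s of spans) {
        // auto-close anything the caller forgot to end
        if (s.endedAt === undefined) {
          s.endedAt = Date.now()
          s.status ??= opts?.status
        }
      }
      send(spans) // one flush for the whole trace
    },
  }
}

let flushes = 0
let sent: SpanRecord[] = []
const trace = makeFlushingTrace((spans) => { flushes++; sent = spans })
trace.startSpan('closed').end({ status: 'ok' })
trace.startSpan('left-open') // never ended by the caller
trace.end({ status: 'ok' })
trace.end({ status: 'ok' }) // no-op: still a single flush
```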
```ts
await trace.end({ status: 'ok' })

// or on failure:
await trace.end({ status: 'error' })
```

### `trace.wrap(spanName, fn): Promise<T>`
Convenience wrapper — starts a span, runs fn(span), then ends the span and trace automatically. The trace is always flushed even if fn throws.
```ts
const result = await trace.wrap('process-order', async (span) => {
  const subSpan = span.startSpan('validate-inventory')
  await checkInventory(items)
  subSpan.end({ status: 'ok' })

  return { orderId: '123' }
})
// trace is automatically ended here
```

### `span.startSpan(name, options?): SpanHandle`
Creates a child span. `parent_span_id` is set automatically.

```ts
const dbSpan = trace.startSpan('db.transaction')

const insertSpan = dbSpan.startSpan('insert-order')
await insertOrder(data)
insertSpan.end({ status: 'ok' })

const insertItemsSpan = dbSpan.startSpan('insert-order-items')
await insertOrderItems(items)
insertItemsSpan.end({ status: 'ok' })

dbSpan.end({ status: 'ok' })
```

### `span.end(options?)`
Records the end time and status locally. Data is sent over the network only when `trace.end()` is called.
| Option | Type | Description |
|---|---|---|
| status | string | `'ok'`, `'error'`, or any string |
| metadata | Record<string, any> | Merged with span metadata |
| tokens | number | AI token count (for AI spans) |
| cost | number | AI cost in USD (for AI spans) |
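As a hedged sketch of the options above (a stand-in handle with assumed field names, not the real SDK): metadata passed to `end()` merges into the span's existing metadata, while status, tokens, and cost are recorded as given.

```ts
// Stand-in span handle showing how end() options are recorded (not the real SDK).
type SpanData = {
  status?: string
  metadata: Record<string, any>
  tokens?: number
  cost?: number
}

function makeSpanStub(initialMetadata: Record<string, any> = {}) {
  const data: SpanData = { metadata: { ...initialMetadata } }
  return {
    data,
    end(opts: Partial<SpanData> = {}) {
      data.status = opts.status
      data.tokens = opts.tokens
      data.cost = opts.cost
      // metadata passed to end() merges with the span's existing metadata
      data.metadata = { ...data.metadata, ...opts.metadata }
    },
  }
}

const span = makeSpanStub({ model: 'gpt-4o' })
span.end({
  status: 'ok',
  tokens: 1234,
  cost: 0.02,
  metadata: { finish_reason: 'stop' },
})
```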
### `span.wrap(fn): Promise<T>`

Runs `fn()` and automatically ends the span. Sets `status: 'error'` if `fn` throws.
```ts
const result = await span.wrap(async () => {
  return await db.query('SELECT * FROM users WHERE id = $1', [userId])
})
```

## AI spans

```ts
const trace = client.tracer.startTrace({ name: 'generate-summary' })

const llmSpan = trace.startSpan('openai.chat', {
  isAI: true,
  model: 'gpt-4o',
  metadata: { prompt_tokens_estimated: 800 },
})

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: prompt }],
})

llmSpan.end({
  status: 'ok',
  tokens: response.usage?.total_tokens,
  cost: calculateCost(response.usage),
  metadata: { finish_reason: response.choices[0].finish_reason },
})

trace.log.info('Summary generated', {
  attrs: { model: 'gpt-4o', tokens: response.usage?.total_tokens },
})

await trace.end({ status: 'ok' })
```

## `client.log.*`

```ts
// Simple logs — no trace context required
client.log.info('Server started', { attrs: { port: 3000 } })
client.log.warn('Rate limit approaching', { user_id: 'user-123' })
client.log.error('Unexpected error', { route: '/api/checkout' })

// With analytics type for tracking events
client.log.info('User signed up', {
  type: 'analytics',
  event_type: 'signup',
  user_id: newUser.id,
})
```

These are buffered and flushed automatically. For logs correlated to a trace, use `trace.log.*` instead.
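The buffering described above can be sketched as a simple batch queue (assumed batch size and flush trigger; the real SDK's flush policy may differ):

```ts
// Minimal buffered-logger sketch (not the real SDK): records queue up in
// memory and are sent as one batch when the buffer fills or on flush().
type Entry = { level: string; msg: string; ctx?: Record<string, any> }

function makeBufferedLogger(send: (batch: Entry[]) => void, maxBuffer = 50) {
  let buffer: Entry[] = []
  const flush = () => {
    if (buffer.length === 0) return
    send(buffer)
    buffer = []
  }
  const log = (level: string) => (msg: string, ctx?: Record<string, any>) => {
    buffer.push({ level, msg, ctx })
    if (buffer.length >= maxBuffer) flush() // full buffer triggers a batch send
  }
  return { info: log('info'), warn: log('warn'), error: log('error'), flush }
}

const batches: Entry[][] = []
const logger = makeBufferedLogger((batch) => batches.push(batch), 2)
logger.info('Server started', { attrs: { port: 3000 } })
logger.warn('Rate limit approaching') // buffer reaches 2: first batch sent
logger.error('Unexpected error')
logger.flush() // remaining entry sent as a second batch
```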