The Basics
Let's start with a simple program. Sigil's syntax will look familiar if you've used Rust, TypeScript, or Kotlin:
// Variables are immutable by default
let name = "Sigil"
let version = 0.2
// Use 'mut' for mutable variables
let mut counter = 0
counter = counter + 1
// Functions
fn greet(name: Str) -> Str {
    "Hello, {name}!" // String interpolation
}
// Call functions
let message = greet("World")
println(message) // Hello, World!
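Putting those pieces together, here is a small illustrative sketch; it reuses only the constructs above plus the for-in loop shown later in this guide, so treat the composition as an assumption rather than canonical style:
// Greet several names and count them (illustrative)
let mut greeted = 0
for person in ["Ada", "Grace", "Alan"] {
    println(greet(person))      // Hello, Ada! ... and so on
    greeted = greeted + 1
}
println("Greeted {greeted} people") // Greeted 3 people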
Try It Yourself
Open the example in the Playground, edit the code, and see the results immediately.
Types
Sigil has a strong static type system with inference:
// Primitives
let integer: i64 = 42
let float: f64 = 3.14
let boolean: bool = true
let character: char = '⛤'
let text: Str = "Hello"
// Collections
let array: [i64; 3] = [1, 2, 3]
let tuple: (i64, Str) = (42, "answer")
let map = { "key": "value" }
// Type inference: annotations are optional when the type is clear
let inferred = 42 // Inferred as i64
Evidentiality
Here's what makes Sigil unique: evidentiality markers track where your data comes from. This prevents entire classes of bugs involving untrusted data.
// ! (bang) = Known: computed locally, verified
let sum! = 1 + 1 // We computed this ourselves
// ? (question) = Uncertain: might be absent
let user? = database.find_user(id) // Might not exist
// ~ (tilde) = Reported: external/untrusted source
let data~ = http.get("https://api.example.com") // Don't trust this!
// ‽ (interrobang) = Paradox: explicit trust boundary
let unsafe_ptr‽ = unsafe { raw_pointer } // You're taking responsibility
Evidence Propagation
The key insight: evidence propagates pessimistically. When you combine data, the result inherits the worst evidence level:
let local! = 100 // Known
let remote~ = api.get_price() // Reported
// The result is Reported~ because remote is untrusted
let total~ = local + remote // Known + Reported = Reported
// To use this data safely, you must validate it
let validated? = total |validate?{
    _ > 0 && _ < 10000 // Range check
}
// Fully verify to get Known! status
let verified! = validated |validate!{
    verify_signature(remote_sig)
}
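For contrast, combining values that are all Known keeps the Known! level, because the worst evidence level in the expression is still Known. A minimal sketch:
let base! = 40            // Known
let bonus! = 2            // Known
let score! = base + bonus // Known + Known = Known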
Morpheme Operators
Sigil uses Greek letters as morpheme operators for data transformation. Think of them as pipeline operators on steroids:
let numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
// Build a processing pipeline
let result = numbers
    |φ{_ % 2 == 0} // φ (phi) = filter: keep evens [2,4,6,8,10]
    |τ{_ ** 2}     // τ (tau) = transform: square [4,16,36,64,100]
    |σ             // σ (sigma) = sort (already sorted)
    |Σ             // Σ (capital sigma) = sum: 220
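Printing the result shows the final value annotated above:
println(result) // 220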
Common Morphemes
// Aggregation
[1,2,3]|Σ // Sum → 6
[1,2,3]|Π // Product → 6
[1,5,3]|μ // Mean → 3
// Access
[1,2,3]|α // First → 1
[1,2,3]|ω // Last → 3
[1,2,3]|λ // Length → 3
// Transformation
data|τ{_.upper()} // Transform each element
data|φ{_.active} // Filter by predicate
data|ρ{a,b => a+b} // Reduce/fold
// Sorting
data|σ // Sort ascending
data|σ·desc // Sort descending
data|σ·by{.name} // Sort by field
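Morphemes compose into chains just like the pipeline above. As an illustrative sketch using only the operators listed here:
let scores = [72, 95, 88, 61]
let passing = scores |φ{_ >= 70} // Filter → [72, 95, 88]
let top = passing |σ·desc |α     // Sort descending, take first → 95
let average = scores |μ          // Mean → 79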
Structs & Pattern Matching
// Define a struct
struct User {
    name: Str,
    email: Str,
    age: i64,
    active: bool,
}
// Implement methods
impl User {
    fn new(name: Str, email: Str) -> User {
        User { name, email, age: 0, active: true }
    }
    fn greet(self) -> Str {
        "Hello, {self.name}!"
    }
}
// Enums
enum Status {
    Active,
    Inactive,
    Pending(Str), // With data
}
// Pattern matching
fn describe(status: Status) -> Str {
    match status {
        Status::Active => "User is active",
        Status::Inactive => "User is inactive",
        Status::Pending(reason) => "Pending: {reason}",
    }
}
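To tie structs, enums, and matching together, here is a short usage sketch; the Status::Pending("...") construction syntax is assumed to mirror the pattern used in describe:
let user = User::new("Ada", "ada@example.com")
println(user.greet())                              // Hello, Ada!
println(describe(Status::Pending("verification"))) // Pending: verification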
Building an Agent
Now for the fun part: let's build an AI agent using Sigil's agent infrastructure:
// Define a research agent with memory and planning
daemon ResearchAgent {
    // Configure memory (Engram layer)
    memory: Engram {
        instant: 8192,    // Context window tokens
        episodic: true,   // Remember experiences
        semantic: true,   // Knowledge graph
        procedural: true, // Learn skills
    }

    // Define tools the agent can use
    tools: [
        web_search,
        read_file,
        write_file,
        run_code,
    ]

    // Planning capability (Omen layer)
    fn plan(self, goal: Goal) -> [Task] {
        self.omen.decompose(goal)
            |φ{.feasible(self.tools)}
            |σ·by{.priority}
    }

    // Execute with explainability (Oracle layer)
    fn execute(self, task: Task) -> Result! {
        let trace = self.oracle.begin_trace()
        let result~ = self.run_tool(task.tool, task.args)
        let validated? = result |validate?{ task.validator }
        trace.record_decision("Validated result: {validated}")
        validated |validate!{ self.verify }
    }
}
// Spawn and use the agent
let agent = ResearchAgent::spawn()
let tasks = agent.plan(Goal::new("Find the latest Rust release"))
for task in tasks {
    let result = agent.execute(task)
    println("Completed: {result}")
}