All AI and ML capabilities ship in the standard library: no pip install, no package.json, no third-party dependencies. Import the relevant bundle and go. See the API reference for full class documentation.
Authentication
Cloud APIs require API keys. Store them in plain text files in your working directory — the libraries read them automatically.
OpenAI
Create openai_api_key.dat containing your key. Read it at startup:
use API.OpenAI, System.IO.Filesystem;
token := FileReader->ReadFile("openai_api_key.dat")->Trim();
Gemini
Create gemini_api_key.dat. Use the built-in helper:
use API.Google.Gemini;
token := EndPoint->GetApiKey(); # reads gemini_api_key.dat
Ollama
No key needed. Start the Ollama daemon with ollama serve, then pull a model:
ollama pull llama3.2
OpenAI
Compile flag: -lib net,net_server,json,cipher,misc,openai
Chat & Text
use API.OpenAI, System.IO.Filesystem;

class ChatExample {
  function : Main(args : String[]) ~ Nil {
    token := FileReader->ReadFile("openai_api_key.dat")->Trim();

    # single-turn
    response := Response->Respond("gpt-4o-mini",
      Pair->New("user", "Explain JIT compilation in one sentence.")<String, String>,
      token);
    response->GetText()->PrintLine();

    # multi-turn conversation
    messages := Vector->New()<Pair<String, String>>;
    messages->AddBack(Pair->New("system", "You are a concise coding assistant.")<String, String>);
    messages->AddBack(Pair->New("user", "What is a closure?")<String, String>);
    messages->AddBack(Pair->New("assistant", "A closure captures its enclosing scope.")<String, String>);
    messages->AddBack(Pair->New("user", "Show me one in Objeck.")<String, String>);
    response := Response->Respond("gpt-4o-mini", messages, token);
    response->GetText()->PrintLine();
  }
}
> obc -src chat.obs -lib net,net_server,json,cipher,misc,openai
> obr chat
Vision
use API.OpenAI, System.IO.Filesystem;
bytes := FileReader->ReadBinaryFile("photo.jpg");
image := ImageQuery->New("What is in this image?", bytes, ImageQuery->MimeType->JPEG);
query := Pair->New("user", image)<String, ImageQuery>;
response := Response->Respond("gpt-4o", query, token);
response->GetText()->PrintLine();
Embeddings
use API.OpenAI;
values := Embedding->Create("Objeck is a JIT-compiled language",
  "text-embedding-3-small", token);
if(values <> Nil) {
  "Dimensions: {$values->Size()}"->PrintLine(); # 1536
};
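Embeddings are most often compared with cosine similarity. The sketch below shows the math over two plain Float arrays; converting the vector returned by Embedding->Create into a Float[], and the availability of a Sqrt() method on Float, are assumptions here, not documented API:

```objeck
# cosine similarity between two embedding vectors;
# a sketch, assuming both arrays have the same length
function : Cosine(a : Float[], b : Float[]) ~ Float {
  dot := 0.0;
  norm_a := 0.0;
  norm_b := 0.0;
  each(i : a) {
    dot += a[i] * b[i];
    norm_a += a[i] * a[i];
    norm_b += b[i] * b[i];
  };
  return dot / (norm_a->Sqrt() * norm_b->Sqrt());
}
```

Values near 1.0 indicate near-identical meaning; values near 0 indicate unrelated text.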
Moderation
Check text against OpenAI's safety categories and get per-category confidence scores:
use API.OpenAI;
result := Moderation->Check("I want to hurt someone.", token);
if(result <> Nil) {
  "Flagged: {$result->IsFlagged()}"->PrintLine();
  if(result->IsFlagged()) {
    score := result->GetScore("violence");
    "violence score: {$score}"->PrintLine();
  };
};
Available categories: harassment, hate, self-harm, sexual, violence (and /threatening, /graphic, /instructions, /intent variants).
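To inspect every category rather than a single one, the same GetScore() call can be applied across the list above. A sketch, assuming each category name is passed as a plain string and that string-array literals are used for the list:

```objeck
use API.OpenAI;

# walk the top-level moderation categories using the
# documented IsFlagged() and GetScore(...) calls
categories := ["harassment", "hate", "self-harm", "sexual", "violence"];
result := Moderation->Check("I want to hurt someone.", token);
if(result <> Nil & result->IsFlagged()) {
  each(name in categories) {
    score := result->GetScore(name);
    "{$name}: {$score}"->PrintLine();
  };
};
```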
Batch Processing
Process up to 50,000 requests asynchronously at 50% of the standard API cost; results are returned within 24 hours.
use API.OpenAI, System.IO.Filesystem;
# 1. upload a .jsonl file of requests
data := FileReader->ReadBinaryFile("requests.jsonl");
file := File->Create("requests.jsonl", "batch", data, token);

# 2. submit the batch
job := Batch->Create(file->GetId(), "/v1/chat/completions", token);
"Batch ID: {$job->GetId()}"->PrintLine(); # status: "validating"

# 3. poll for completion (typically minutes to hours)
job := Batch->Get(job->GetId(), token);
if(job->IsComplete()) {
  "Output file: {$job->GetOutputFileId()}"->PrintLine();
};
Request format (requests.jsonl, one JSON object per line):
{"custom_id":"r1","method":"POST","url":"/v1/chat/completions","body":{"model":"gpt-4o-mini","messages":[{"role":"user","content":"Hello"}]}}
Realtime Audio
Send text or audio over a WebSocket and receive both a text transcript and a PCM audio response. Requires -lib sdl2 for playback.
use API.OpenAI, Game.SDL2;
# text → text + audio
response := Realtime->Respond("What time is it in Tokyo?",
  "gpt-4o-realtime-preview", token);
if(response <> Nil) {
  response->GetFirst()->PrintLine(); # transcript
  audio := response->GetSecond();
  Mixer->PlayPcm(audio->Get(), 24000,
    AudioFormat->SDL_AUDIO_S16LSB, 1); # PCM 16-bit LE, 24kHz mono
};
Gemini
Compile flag: -lib net,net_server,json,cipher,misc,gemini
Chat & Vision
use API.Google.Gemini;

class GeminiChat {
  function : Main(args : String[]) ~ Nil {
    token := EndPoint->GetApiKey();

    # single-turn text
    content := Content->New("user")
      ->AddPart(TextPart->New("Why is the sky blue?"));
    candidates := Model->GenerateContent("models/gemini-2.0-flash", content, token);
    if(candidates <> Nil & <>candidates->IsEmpty()) {
      candidates->First()->GetAllText()->PrintLine();
    };

    # image + text
    bytes := System.IO.Filesystem.FileReader->ReadBinaryFile("chart.png");
    content := Content->New("user")
      ->AddPart(TextPart->New("Summarize this chart."))
      ->AddPart(BinaryPart->New(bytes, "image/png"));
    candidates := Model->GenerateContent("models/gemini-2.0-flash", content, token);
    if(candidates <> Nil & <>candidates->IsEmpty()) {
      candidates->First()->GetAllText()->PrintLine();
    };

    # multi-turn chat with system instruction
    chat := Chat->New("models/gemini-2.0-flash", token);
    chat->SetSystemInstruction(
      Content->New("system")->AddPart(TextPart->New("You are a coding assistant.")));
    r1 := chat->SendPart(TextPart->New("What is a closure?"), "user");
    r1->GetAllText()->PrintLine();
    r2 := chat->SendPart(TextPart->New("Show me one in Objeck."), "user");
    r2->GetAllText()->PrintLine();
  }
}
Search Grounding
Anchor model responses in live Google Search results for up-to-date answers:
use API.Google.Gemini;
content := Content->New("user")
  ->AddPart(TextPart->New("What major AI models were released this month?"));
candidates := Model->GenerateContentWithGrounding(
  "models/gemini-2.0-flash", content, token);
if(candidates <> Nil & <>candidates->IsEmpty()) {
  candidates->First()->GetAllText()->PrintLine();
};
Files API
Upload files once and reference them across multiple requests without re-uploading:
use API.Google.Gemini, System.IO.Filesystem;
# upload
data := FileReader->ReadBinaryFile("report.pdf");
file := FileManager->Upload("Q1 Report", data, "application/pdf", token);
if(file <> Nil & file->IsActive()) {
  "URI: {$file->GetUri()}"->PrintLine();
};

# list active files
files := FileManager->List(token);
each(f in files) {
  "{$f->GetName()}: {$f->GetState()}"->PrintLine();
};

# delete
FileManager->Delete("files/abc123", token);
Context Caching
Cache large, reused content server-side to avoid re-tokenization costs on every request:
use API.Google.Gemini, System.IO.Filesystem;
large_context := FileReader->ReadFile("legal_document.txt");
content := Content->New("user")->AddPart(TextPart->New(large_context));

# cache for 5 minutes (300 seconds)
item := CachedContent->Create(
  "models/gemini-1.5-pro-001", content, 300, "legal-doc-cache", token);
if(item <> Nil) {
  "Tokens cached: {$item->GetTokenCount()}"->PrintLine();
  "Expires: {$item->GetExpireTime()}"->PrintLine();
};

# clean up when done
CachedContent->Delete("cachedContents/abc123", token);
Embeddings
use API.Google.Gemini;
# single embedding (768 dimensions)
content := Content->New("user")->AddPart(TextPart->New("machine learning"));
values := Model->EmbedContent(content, token);
"Dimensions: {$values->Size()}"->PrintLine();

# batch — multiple texts in one round-trip
texts := Vector->New()<String>;
texts->AddBack("Objeck is JIT-compiled");
texts->AddBack("Python uses an interpreter");
texts->AddBack("Rust is memory-safe");
embeddings := Model->BatchEmbedContent("models/text-embedding-004", texts, token);
each(i : embeddings) {
  " [{$i}] dim={$embeddings->Get(i)->Size()}"->PrintLine();
};
Ollama (Local Models)
Run open-source models locally. No API key, no data leaves your machine. Install Ollama, then pull a model:
ollama pull llama3.2
Compile flag: -lib net,json,cipher,misc,ollama
use API.Ollama;
class LocalChat {
  function : Main(args : String[]) ~ Nil {
    # one-shot generation
    Completion->Generate("llama3.2", "What is 2 + 2?")->PrintLine();

    # with options
    opts := Options->New()->SetTemperature(0.2);
    Completion->Generate("llama3.2", "List 3 capitals of Europe.", opts)->PrintLine();

    # multi-turn chat — context is maintained automatically
    chat := Chat->New("llama3.2");
    r1 := chat->Send("My name is Alice.");
    r2 := chat->Send("What is my name?");
    r2->PrintLine(); # "Your name is Alice."

    # vision (multimodal models)
    image := System.IO.Filesystem.File->New("photo.jpg");
    Completion->Generate("llava", "Describe this image.", image)->PrintLine();

    # local embeddings (no API key required)
    values := Model->Embeddings("nomic-embed-text", "machine learning");
    "Dimensions: {$values->Size()}"->PrintLine();
  }
}
> obc -src local_chat.obs -lib net,json,cipher,misc,ollama
> obr local_chat
ONNX Local Inference
Run ML models locally using the ONNX Runtime. Supports DirectML (Windows), CUDA (Linux), and CoreML (macOS). No GPU required for CPU inference. Models are loaded from local files — no internet connection needed at runtime.
Compile flag: -lib json,cipher,opencv,onnx
Face Recognition
Uses InsightFace buffalo_l: SCRFD 10G-KPS face detector + ArcFace R50 512-dim embeddings.
Download models:
curl -L -o buffalo_l.zip https://github.com/deepinsight/insightface/releases/download/v0.7/buffalo_l.zip
unzip buffalo_l.zip   # contains det_10g.onnx and w600k_r50.onnx
use API.Onnx, System.IO.Filesystem;
session := FaceSession->New("det_10g.onnx", "w600k_r50.onnx");

img1 := FileReader->ReadBinaryFile("person_a.jpg");
img2 := FileReader->ReadBinaryFile("person_b.jpg");
r1 := session->Recognize(img1, 0.5);
r2 := session->Recognize(img2, 0.5);
if(r1->GetSize() > 0 & r2->GetSize() > 0) {
  emb1 := r1->GetResults()[0]->GetEmbedding();
  emb2 := r2->GetResults()[0]->GetEmbedding();
  sim := FaceSession->Compare(emb1, emb2);
  "Similarity: {$sim}"->PrintLine();
  "Same person: {$(sim > 0.35)}"->PrintLine();
};
session->Close();
Object Detection (YOLO)
Download a YOLOv11 model (requires the ultralytics package):
yolo export model=yolo11n.pt format=onnx
use API.Onnx, System.IO.Filesystem;
labels := String->New[80]; # COCO 80-class labels
labels[0] := "person"; labels[1] := "bicycle"; # ... fill all 80

session := YoloSession->New("yolo11n.onnx"); # filename produced by the export above
img := FileReader->ReadBinaryFile("street.jpg");
result := session->Inference(img, 640, 640, 0.5, labels);
each(d in result->GetClassifications()) {
  "{$d->GetLabel()}: {$d->GetConfidence()}"->PrintLine();
};
session->Close();
Phi-3 (Local SLM)
Run Microsoft's Phi-3 Mini locally for text generation — no internet connection required at inference time.
Download model (~2 GB):
huggingface-cli download microsoft/Phi-3-mini-4k-instruct-onnx \
  --include "directml/directml-int4-awq-block-128/*"
use API.Onnx, System.IO.Filesystem;
tokenizer := Phi3Tokenizer->New("tokenizer.json");
session := Phi3Session->New("phi3-mini-4k-instruct.onnx");
prompt := "<|user|>\nWhat is the capital of France?<|end|>\n<|assistant|>\n";
token_ids := tokenizer->Encode(prompt);
eos := Int->New[1]; eos[0] := 32007;
result := session->Generate(token_ids, 200, 0.7, eos);
tokenizer->Decode(result->GetTokenIds())->PrintLine();
session->Close();
Computer Vision (OpenCV)
Compile flag: -lib json,cipher,opencv
use API.OpenCV;
class VisionExample {
  function : Main(args : String[]) ~ Nil {
    # load, process, save
    image := Image->New("photo.jpg");
    gray := image->ToGray();
    blurred := gray->GaussianBlur(5, 5);
    edges := blurred->Canny(50, 150);
    edges->Save("edges.jpg");

    # Haar cascade face detection
    detector := FaceDetector->New("haarcascade_frontalface_default.xml");
    faces := detector->Detect(image);
    "Faces detected: {$faces->Size()}"->PrintLine();

    # resize and color conversion
    resized := image->Resize(320, 240);
    hsv := resized->ToHsv();
    hsv->Save("hsv_output.jpg");
  }
}
Natural Language Processing
Built-in NLP primitives — no model download required. Pure Objeck bytecode.
Compile flag: -lib gen_collect,nlp
use API.ML.NLP;
class NLPExample {
  function : Main(args : String[]) ~ Nil {
    # sentiment analysis
    text := "This product is absolutely wonderful!";
    SentimentAnalyzer->Classify(text)->PrintLine(); # "positive"

    # TF-IDF vectorization
    docs := String->New[3];
    docs[0] := "cats are pets";
    docs[1] := "dogs are pets";
    docs[2] := "birds can fly";
    tfidf := TF_IDF->New();
    tfidf->Fit(docs);
    vector := tfidf->Transform("cats and dogs");
    each(v in vector) { v->PrintLine(); };

    # cosine similarity
    sim := TextSimilarity->Cosine("hello world", "hello there");
    "Similarity: {$sim}"->PrintLine();

    # tokenization
    tokens := Tokenizer->Tokenize("The quick brown fox");
    each(t in tokens) { t->PrintLine(); };
  }
}
> obc -src nlp_example.obs -lib gen_collect,nlp
> obr nlp_example
Quick Reference
| Capability | Class | Library flag | Key required |
|---|---|---|---|
| Chat / text | Response | openai | OpenAI |
| Vision (image input) | Response + ImageQuery | openai | OpenAI |
| Realtime audio | Realtime | openai,sdl2 | OpenAI |
| Image generation | Image | openai | OpenAI |
| Text embeddings | Embedding | openai | OpenAI |
| Content moderation | Moderation | openai | OpenAI |
| Batch processing | Batch | openai | OpenAI |
| Gemini chat/vision | Model::GenerateContent | gemini | Gemini |
| Search grounding | Model::GenerateContentWithGrounding | gemini | Gemini |
| File upload | FileManager | gemini | Gemini |
| Context caching | CachedContent | gemini | Gemini |
| Gemini embeddings | Model::EmbedContent | gemini | Gemini |
| Batch embeddings | Model::BatchEmbedContent | gemini | Gemini |
| Local chat | Completion / Chat | ollama | None |
| Local embeddings | Model::Embeddings | ollama | None |
| Face recognition | FaceSession | onnx | None |
| Object detection | YoloSession | onnx | None |
| Local SLM | Phi3Session | onnx | None |
| Image classification | ResNetSession | onnx | None |
| Computer vision | Image, FaceDetector | opencv | None |
| Sentiment / TF-IDF | SentimentAnalyzer, TF_IDF | nlp | None |
Model Recommendations
| Task | OpenAI | Gemini | Ollama (local) |
|---|---|---|---|
| General chat | gpt-4o-mini | gemini-2.0-flash | llama3.2 |
| Reasoning | o3-mini | gemini-2.5-pro | qwen2.5 |
| Vision | gpt-4o | gemini-2.0-flash | llava |
| Realtime audio | gpt-4o-realtime-preview | — | — |
| Embeddings | text-embedding-3-small | text-embedding-004 | nomic-embed-text |
| Fast + cheap | gpt-4o-mini | gemini-2.5-flash | phi3 |
Full working examples are in programs/frameworks/:

openai/      openai_chat.obs, openai_vision.obs, openai_moderation.obs, openai_batch.obs, openai_tune.obs, openai_responses.obs
gemini/      gemini_image.obs, gemini_audio.obs, gemini_files.obs, gemini_cache.obs, gemini_ground.obs, gemini_embed.obs
ollama/      ollama_chat.obs, ollama_vision.obs
opencv_onnx/ face_recog.obs, yolo_detect.obs, phi3_chat.obs