Summary: Most stacks โlearnโ by fine-tuning weights and redeploying โ powerful, but opaque. SI-Core already produces *structured evidence* (jump logs, ethics traces, effect ledgers, goal vectors, rollback traces), so learning can be *structural* instead:
*Upgrade policies, compensators, SIL code, and goal structures โ using runtime evidence.*
> Learning isnโt a model tweak. > *Itโs upgrading the structures that shape behavior.*
---
Why It Matters: โข Makes improvement *localized and explainable* (what changed, where, and why) โข Keeps โself-improvementโ *governable* (versioned deltas + review + CI/CD) โข Turns incidents/metric drift into *actionable patches*, not postmortem PDFs โข Scales to real ops: ethics policies, rollback plans, semantic compression, goal estimators
---
Whatโs Inside: โข What โlearningโ means in SI-Core (and what changes vs. classic ML) โข The *Pattern-Learning-Bridge*: where it sits between runtime evidence and governed code โข Safety properties: PLB proposes *versioned deltas*, never edits production directly โข Validation pipeline: sandbox/simulation โ conformance checks โ golden diffs โ rollout
---
๐ Structured Intelligence Engineering Series A non-normative, implementable design for โlearning from failuresโ without sacrificing auditability.
reacted to MikeDoes's
post with ๐๐9 days ago
What if an AI agent could be tricked into stealing your data, just by reading a tool's description? A new paper reports it's possible.
The "Attractive Metadata Attack" paper details this stealthy new threat. To measure the real-world impact of their attack, the researchers needed a source of sensitive data for the agent to leak. We're proud that the AI4Privacy corpus was used to create the synthetic user profiles containing standardized PII for their experiments.
This is a perfect win-win. Our open-source data helped researchers Kanghua Mo, ้พๆฑไธ, Zhihao Li from Guangzhou University and The Hong Kong Polytechnic University to not just demonstrate a new attack, but also quantify its potential for harm. This data-driven evidence is what pushes the community to build better, execution-level defenses for AI agents.
๐ Check out their paper to see how easily an agent's trust in tool metadata could be exploited: https://arxiv.org/pdf/2508.02110
๐ Introducing VideoCoF: Unified Video Editing with a Temporal Reasoner (Chain-of-Frames)!
Weโre excited to introduce VideoCoF, a unified framework for instruction-based video editing that enables temporal reasoning and ~4ร video length extrapolation, trained with only 50k video pairs. ๐ฅ
๐ What makes VideoCoF different? ๐ง Chain-of-Frames reasoning , mimic human thinking process like Seeing โ Reasoning โ Editing to apply edits accurately over time without external masks, ensuring physically plausible results. ๐ Strong length generalization โ trained on 33-frame clips, yet supports multi-shot editing and long-video extrapolation (~4ร). ๐ฏ Unified fine-grained editing โ Object Removal, Addition, Swap, and Local Style Transfer, with instance-level & part-level, spatial-aware control.
โก Fast inference update ๐ H100: ~20s / video with 4-step inference, making high-quality video editing far more practical for real-world use.
๐ Introducing VideoCoF: Unified Video Editing with a Temporal Reasoner (Chain-of-Frames)!
Weโre excited to introduce VideoCoF, a unified framework for instruction-based video editing that enables temporal reasoning and ~4ร video length extrapolation, trained with only 50k video pairs. ๐ฅ
๐ What makes VideoCoF different? ๐ง Chain-of-Frames reasoning , mimic human thinking process like Seeing โ Reasoning โ Editing to apply edits accurately over time without external masks, ensuring physically plausible results. ๐ Strong length generalization โ trained on 33-frame clips, yet supports multi-shot editing and long-video extrapolation (~4ร). ๐ฏ Unified fine-grained editing โ Object Removal, Addition, Swap, and Local Style Transfer, with instance-level & part-level, spatial-aware control.
โก Fast inference update ๐ H100: ~20s / video with 4-step inference, making high-quality video editing far more practical for real-world use.
The models come in Thinking and Instruct versions and utilize a new architecture, allowing it to have ~10x faster inference than Qwen32B. ๐ Step-by-step Guide: https://docs.unsloth.ai/models/qwen3-next
The LLM by @karpathy is officially in the library, and we wrote a blog covering: how did we port the model, differences from the original, and how to run or train it.
Z-Image Turbo LoRA training with Ostris AI Toolkit + Z-Image Turbo Fun Controlnet Union + 1-click to download and install the very best Z-Image Turbo presets. In this tutorial, I will explain how to setup Z-Image Turbo model properly in your local PC with SwarmUI and download models and use them with highest quality via ready presets. Moreover, I will show to install Z-Image Turbo Fun Controlnet Union to generate amazing quality images with ControlNet preprocessors. Furthermore, I will show how to 1-click install AI Toolkit from Ostris and train Z-Image Turbo model LoRAs with highest quality configs made for every GPU like 8 GB GPUs, 12 GB GPUs, 24 GB GPUs and so on. I did a massive research to prepare these Z-Image Turbo model training configurations.
Today, we announce Mistral 3, the next generation of Mistral models. Mistral 3 includes three state-of-the-art small, dense models (14B, 8B, and 3B) and Mistral Large 3 โ our most capable model to date โ a sparse mixture-of-experts trained with 41B active and 675B total parameters.
All models are released under the Apache 2.0 license.