pascalmusabyimana
pascal-maker
19 followers · 102 following
https://pascal-maker.github.io/developedbypascalmusabyimana/
PascalMusabyim1
pascal-maker
pascal-musabyimana-573b66178
AI & ML interests
Computer vision, NLP, machine learning, and deep learning
Recent Activity
Reacted to kanaria007's post with 👀 · about 15 hours ago
✅ New Article: *Pattern-Learning-Bridge (PLB)*

Title: 🧩 Pattern-Learning-Bridge: How SI-Core Actually Learns From Its Own Failures
🔗 https://huggingface.co/blog/kanaria007/learns-from-its-own-failures

---

Summary:
Most stacks "learn" by fine-tuning weights and redeploying — powerful, but opaque. SI-Core already produces *structured evidence* (jump logs, ethics traces, effect ledgers, goal vectors, rollback traces), so learning can be *structural* instead: *upgrade policies, compensators, SIL code, and goal structures — using runtime evidence.*

> Learning isn't a model tweak.
> *It's upgrading the structures that shape behavior.*

---

Why It Matters:
• Makes improvement *localized and explainable* (what changed, where, and why)
• Keeps "self-improvement" *governable* (versioned deltas + review + CI/CD)
• Turns incidents/metric drift into *actionable patches*, not postmortem PDFs
• Scales to real ops: ethics policies, rollback plans, semantic compression, goal estimators

---

What's Inside:
• What "learning" means in SI-Core (and what changes vs. classic ML)
• The *Pattern-Learning-Bridge*: where it sits between runtime evidence and governed code
• Safety properties: PLB proposes *versioned deltas*, never edits production directly
• Validation pipeline: sandbox/simulation → conformance checks → golden diffs → rollout

---

📖 Structured Intelligence Engineering Series
A non-normative, implementable design for "learning from failures" without sacrificing auditability.
Reacted to MikeDoes's post with 👀 · 2 days ago
What if an AI agent could be tricked into stealing your data just by reading a tool's description? A new paper reports it's possible.

The "Attractive Metadata Attack" paper details this stealthy new threat. To measure the real-world impact of their attack, the researchers needed a source of sensitive data for the agent to leak. We're proud that the AI4Privacy corpus was used to create the synthetic user profiles containing standardized PII for their experiments.

This is a perfect win-win. Our open-source data helped researchers Kanghua Mo, 龙昱丞, and Zhihao Li from Guangzhou University and The Hong Kong Polytechnic University not just to demonstrate a new attack, but also to quantify its potential for harm. This data-driven evidence is what pushes the community to build better, execution-level defenses for AI agents.

🔗 Check out their paper to see how easily an agent's trust in tool metadata could be exploited: https://arxiv.org/pdf/2508.02110

#OpenSource #DataPrivacy #LLM #Anonymization #AIsecurity #HuggingFace #Ai4Privacy #Worldslargestopensourceprivacymaskingdataset
Organizations

pascal-maker's Spaces (7)
- My Argilla ✍ (pinned, Paused)
- Agentscomparison Dashboard 🚀 (Sleeping) — Display project metrics with real-time updates
- Medical VLM with SAM-2 and CheXagent 🚀 (Paused) — A comprehensive medical imaging analysis tool
- Medical Imaging Analysis 🏆 (Paused)
- medicalaiapp 🚀 (Paused)
- luminus 🚀 (Paused)
- Debugcode 🔥 (Paused)