safe at scale and do we audit that sa ...
Capability: How good are open-source models compared to GPT-4/5? They're already there — or nearly so — in many ways. Over the past two years, open-source models have progressed incredibly. Meta's LLaMA 3, Mistral's Mixtral, Cohere's Command R+, and Microsoft's Phi-3 are some models that have shownRead more
Capability: How good are open-source models compared to GPT-4/5?
They’re already there — or nearly so — in many ways.
Over the past two years, open-source models have progressed incredibly. Meta’s LLaMA 3, Mistral’s Mixtral, Cohere’s Command R+, and Microsoft’s Phi-3 are some models that have shown that smaller or open-weight models can catch up or get very close to GPT-4 levels on several benchmarks, especially in some areas such as reasoning, retrieval-augmented generation (RAG), or coding.
Models are becoming:
- Smaller and more efficient
- Trained with better data curation
- Tuned on open instruction datasets
- Can be customized by organizations or companies for particular use cases
The open world is rapidly closing the gap on research published (or spilled) by big labs. The gap that previously existed between open and closed models was 2–3 years; now it’s down to maybe 6–12 months, and in some tasks, it’s nearly even.
However, when it comes to truly frontier models — like GPT-4, GPT-4o, Gemini 1.5, or Claude 3.5 — there’s still a noticeable lead in:
- Multimodal integration (text, vision, audio, video)
- Robustness under pressure
- Scalability and latency at large scale
- Zero-shot reasoning across diverse domains
So yes, open-source is closing in — but there’s still an infrastructure and quality gap at the top. It’s not simply model weights, but tooling, infrastructure, evaluation, and guardrails.
Safety: Are open models as safe as closed models?
That is a much harder one.
Open-source models are open — you know what you’re dealing with, you can audit the weights, you can know the training data (in theory). That’s a gigantic safety and trust benefit.
But there’s a downside:
- The moment you open-sourced a good model, anyone can use it — for good or ill.
- With closed models, you can’t prevent misuse (e.g., making malware, disinformation, or violent content).
- Fine-tuning or prompt injection can make even a very “safe” model act out.
Private labs like OpenAI, Anthropic, and Google build in:
- Robust content filters
- Alignment layers
- Red-teaming protocols
- Abuse detection
And centralized control — which, for better or worse, allows them to enforce safety policies and ban bad actors
This centralization can feel like “gatekeeping,” but it’s also what enables strong guardrails — which are harder to maintain in the open-source world without central infrastructure.
That said, there are a few open-source projects at the forefront of community-driven safety tools, including:
- Reinforcement learning from human feedback (RLHF)
- Constitutional AI
- Model cards and audits
- Open evaluation platforms (e.g., HELM, Arena, LMSYS)
So while open-source safety is behind the curve, it’s increasing fast — and more cooperatively.
The Bigger Picture: Why this question matters
Fundamentally, this question is really about who gets to determine the future of AI.
- If only a few dominant players gain access to state-of-the-art AI, there’s risk of concentrated power, opaque decision-making, and economic distortion.
- But if it’s all open-source, there’s the risk of untrammeled abuse, mass-scale disinformation, or even destabilization.
The most promising future likely exists in hybrid solutions:
- Open-weight models with community safety layers
- Closed models with open APIs
- Policy frameworks that encourage responsibility, not regulation
- Cooperation between labs, governments, and civil society
TL;DR — Final Thoughts
- Yes, open-source AI models are rapidly closing the capability gap — and will soon match, and then surpass, closed models in many areas.
- But safety is more complicated. Closed systems still have more control mechanisms intact, although open-source is advancing rapidly in that area, too.
- The biggest challenge is how to build a world where AI is possible, accessible, and secure — without putting that capability in the hands of a few.
What Is "Safe AI at Scale" Even? AI "safety" isn't one thing — it's a moving target made up of many overlapping concerns. In general, we can break it down to three layers: 1. Technical Safety Making sure the AI: Doesn't generate harmful or false content Doesn't hallucinate, spread misinformation, orRead more
What Is “Safe AI at Scale” Even?
AI “safety” isn’t one thing — it’s a moving target made up of many overlapping concerns. In general, we can break it down to three layers:
1. Technical Safety
Making sure the AI:
2. Social / Ethical Safety
Making sure the AI:
3. Systemic / Governance-Level Safety
Guaranteeing:
So when we ask, “Is it safe?”, we’re really asking:
Can something so versatile, strong, and enigmatic be controllable, just, and predictable — even when it’s everywhere?
Why Safety Is So Hard at Scale
Here’s why:
1. The AI is a black box
Current-day AI models (specifically large language models) are distinct from traditional software. You can’t see precisely how they “make a decision.” Their internal workings are of high dimensionality and largely incomprehensible. Therefore, even well-intentioned programmers can’t predict as much as they’d like about what is happening when the model is pushed to its extremes.
2. The world is unpredictable
No one can conceivably foresee every use (abuse) of an AI model. Criminals are creative. So are children, activists, advertisers, and pranksters. As usage expands, so does the array of edge cases — and many of them are not innocuous.
3. Cultural values aren’t universal
What’s “safe” in one culture can be offensive or even dangerous in another. A politically censoring AI based in the U.S., for example, might be deemed biased elsewhere in the world, or one trying to be inclusive in the West might be at odds with prevailing norms elsewhere. There is no single definition of “aligned values” globally.
4. Incentives aren’t always aligned
Many companies are racing to produce better-performance models earlier. Pressure to cut corners, beat the safety clock, or hide faults from scrutiny leads to mistakes. When secrecy and competition are present, safety suffers.
How Do We Audit AI for Safety?
This is the meat of your question — not just “is it safe,” but “how can we be certain?
These are the main techniques being used or under development to audit AI models for safety:
1. Red Teaming
Disadvantages:
Can’t test everything.
2. Automated Evaluations
Limitations:
3. Human Preference Feedback
Constraints:
4. Transparency Reports & Model Cards
Limitations:
5. Third-Party Audits
Limitations:
6. “Constitutional” or Rule-Based AI
Limitations:
What Would “Safe AI at Scale” Actually Look Like?
If we’re being a little optimistic — but also pragmatic — here’s what an actually safe, at-scale AI system might entail:
But. Will It Ever Be Fully Safe?
No tech is ever 100% safe. Not cars, not pharmaceuticals, not the web. And neither is AI.
But this is what’s different: AI isn’t a tool — it’s a general-purpose cognitive machine that works with humans, society, and knowledge at scale. That makes it exponentially more powerful — and exponentially more difficult to control.
So no, we can’t make it “perfectly safe.
But we can make it quantifiably safer, more transparent, and more accountable — if we tackle safety not as a one-time checkbox but as a continuous social contract among developers, users, governments, and communities.
Final Thoughts (Human to Human)
You’re not the only one if you feel uneasy about AI growing this fast. The scale, speed, and ambiguity of it all is head-spinning — especially because most of us never voted on its deployment.
But asking, “Can it be safe?” is the first step to making it safer.
Not perfect. Not harmless on all counts. But more regulated, more humane, and more responsive to true human needs.
And that’s not a technical project. That is a human one.
See less