How Multimodal Models Will Change Everyday Computing
Over the last decade, we have seen technology get smaller, quicker, and more intuitive. But multimodal AI (computer systems that grasp text, images, audio, video, and actions together) is more than the next update; it’s the leap that will change computers from tools we operate into partners we collaborate with.
Today, you tell a computer what to do.
Tomorrow, you will show it, tell it, demonstrate it, or even let it observe, and it will understand.
Let’s see how this changes everyday life.
1. Computers will finally understand context like humans do.
At the moment, your laptop or phone only understands typed or spoken commands. It doesn’t “see” your screen or “hear” the environment in a meaningful way.
Multimodal AI changes that.
Imagine saying:
- “Fix this error” while pointing your camera at a screen.
The AI will read the error message, understand your tone of voice, analyze the background noise, and reply:
- “This is a Java null pointer issue. Let me rewrite the method so it handles the edge case.”
This is the first time computers gain real sensory understanding. They won’t simply process information; they will actively perceive.
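As a rough illustration of the kind of fix such an assistant might propose for a null pointer error (the class and method names here are hypothetical, invented purely for the example), the rewritten method would guard the edge case explicitly:

```java
// Hypothetical example of an AI-suggested fix: the original code
// dereferenced a possibly-null String, throwing NullPointerException.
// The rewritten method handles the null/empty edge case explicitly.
public class GreetingFormatter {
    public static String format(String name) {
        // Guard against the edge case instead of crashing.
        if (name == null || name.isBlank()) {
            return "Hello, guest!";
        }
        return "Hello, " + name.trim() + "!";
    }

    public static void main(String[] args) {
        System.out.println(format(null));     // prints "Hello, guest!"
        System.out.println(format("  Ada ")); // prints "Hello, Ada!"
    }
}
```

The point is not the specific fix but that the assistant could arrive at it by combining what it sees on screen with what you say.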
2. Software will become invisible: tasks will flow through conversation + demonstration
Today you switch between apps: Google, WhatsApp, Excel, VS Code, Camera…
In the multimodal world, you’ll be interacting with tasks, not apps.
You might say:
- “Generate a summary of this video call and send it to my team.”
- “Crop me out from this photo and put me on a white background.”
- “Watch this YouTube tutorial and create a script based on it.”
No need to open editing tools or switch windows.
The AI becomes the layer that controls your tools for you, sort of like having a personal operating system inside your operating system.
3. The New Generation of Personal Assistants: Thoughtfully Observant rather than Just Reactive
Siri and Alexa feel robotic because they are single-modal; they understand speech alone.
Future assistants will:
- See what you’re working on
- Hear your environment
- Read what’s on your screen
- Watch your workflow
- Predict what you want next
Imagine working a night shift, and your assistant politely says:
- “You’ve been coding for 3 hours. Want me to draft tomorrow’s meeting notes while you finish this function?”
It will feel like a real teammate: organizing, reminding, optimizing, and learning your patterns.
4. Workflows will become faster, more natural and less technical.
Multimodal AI will turn the most complicated tasks into a single request.
Examples:
- Documents
“Convert this handwritten page into a formatted Word doc and highlight the action points.”
- Design
“Here’s a wireframe; make it into an attractive UI mockup with three color themes.”
- Learning
“Watch this physics video and give me a summary for beginners with examples.”
- Creative
“Use my voice and this melody to create a clean studio-level version.”
We will move from doing the task to describing the result.
This reduces the technical skill barrier for everyone.
5. Education and training will become more interactive and personalized.
Instead of just reading text or watching a video, a multimodal tutor can:
- Grade assignments by reading handwriting
- Explain concepts while looking at what the student is solving.
- Watch students practice skills-music, sports, drawing-and give feedback in real-time
- Analyze tone, expressions, and understanding levels
Learning develops into a dynamic, two-way conversation rather than a one-way lecture.
6. Healthcare, Fitness, and Lifestyle Will Benefit Immensely
Imagine this:
- It watches your form while you work out and corrects it.
- It listens to your cough and analyzes it.
- It studies your plate of food and calculates nutrition.
- It reads your expression and detects stress or burnout.
- It processes diagnostic medical images or videos.
This is proactive, everyday health support, not just diagnostics.
7. The Creative Industries Will Explode With New Possibilities
AI will not replace creativity; it’ll supercharge it.
- Film editors can say: “Trim the awkward pauses from this interview.”
- Musicians can hum a tune and generate a full composition.
- Users can upload a video scene and request AI to write dialogues.
- Designers can turn sketches, voice notes, and references into full visuals.
Being creative then becomes more about imagination and less about mastering tools.
8. Computing Will Feel More Human, Less Mechanical
The most profound change?
We won’t have to “learn computers” anymore; rather, computers will learn us.
We’ll be communicating with machines using:
- Voice
- Gestures
- Screenshots
- Photos
- Real-world objects
- Videos
- Physical context
That’s precisely how human beings communicate with one another.
Computing becomes intuitive, almost invisible.
Overview: Multimodal AI makes the computer an intelligent companion.
They will see, listen, read, and make sense of the world as we do:
- They will help us at work, home, school, and in creative fields.
- They will make digital tasks natural and human-friendly.
- They will reduce the need for complex software skills.
- They will shift computing from “operating apps” to “achieving outcomes.”
The next wave of AI is not about bigger models; it’s about smarter interaction.
What we do know
Microsoft and Nvidia announced an investment deal in Anthropic totalling up to US $15 billion. Specifically, Nvidia committed up to US $10 billion, and Microsoft up to US $5 billion.
Some reports tied this investment to a valuation estimate of around US $350 billion for Anthropic. For example: “Sources told CNBC that the fresh investment valued Anthropic at US$350 billion, making it one of the world’s most valuable companies.”
Other, earlier credible data show that in September 2025, after a US$13 billion fundraise, Anthropic’s valuation was around US$183 billion.
Did it reach US$350 billion right now?
Not definitively. The situation is nuanced:
- The US$350 billion figure is reported by some sources, but appears to be an estimate or preliminary valuation discussion, rather than a publicly confirmed post-money valuation.
- The more concretely verified figure is US$183 billion (post-money) following the US$13 billion raise in September 2025. That is official.
- Because high valuations for private companies can vary wildly (depending on assumptions about future growth, investor commitments, options, etc.), the “US$350 billion” mark may reflect a valuation expectation or potential cap rather than the formally stated result of the latest transaction.
Why the discrepancy?
Several factors explain why one figure is widely cited (US$350 billion) and another (US$183 billion) is more concretely documented:
- Timing of valuation announcements: Valuations can shift rapidly in the AI-startup boom. The US$183 billion figure corresponds with the September 2025 round, which is the most recent clearly disclosed. The US$350 billion number may anticipate a future round or reflect investor commitments at conditional levels.
- Nature of the investment deal: The Microsoft/Nvidia deal (US $15 billion) includes up to certain amounts (“up to US $10 billion from Nvidia”, “up to US $5 billion from Microsoft”). “Up to” indicates contingent parts, not necessarily all deployed yet.
- Valuation calculations differ: Some valuations include not just equity but also commitments to purchase infrastructure, cloud credits, chip purchases, etc. For example, Anthropic reportedly committed to purchase up to US $30 billion of Microsoft’s cloud capacity as part of the deal.
- Media reports vs company-disclosed numbers: Media outlets often publish “sources say” valuations; companies may not yet confirm them. So the US$350 billion number may be circulating before formal confirmation.
My best summary answer
In plain terms: While there are reports that Anthropic is valued at around US $350 billion in connection with the Microsoft/Nvidia investment deal, the only firm, publicly disclosed valuation as of now is around US $183 billion (after the US $13 billion funding round). Therefore, it is not yet definitively confirmed that the valuation “reached” US$350 billion in a fully closed deal.
Why this matters
- For you (and for the industry): If this valuation is accurate or soon to be, it signals how intensely the AI race is priced. Startups are being valued not on current earnings but on massive future expectations.
- It raises questions about sustainability: When valuations jump so fast (and to such large numbers), it makes sense to ask: Are earnings keeping up? Are business models proven? Are these valuations realistic or inflated by hype?
- The deal with Microsoft and Nvidia has deeper implications: It’s not just about money; it’s about infrastructure (cloud, chips), long-term partnerships, and strategic control in the AI stack.