1. The Simple Idea: Machines Taught to “Think”
Artificial Intelligence is the practice of making computers do intelligent things: not just following instructions, but learning from information and improving over time.
In regular programming, humans teach computers to accomplish things step by step.
In AI, computers learn to solve problems on their own by finding patterns in data.
For example:
When Siri tells you the weather, it is not reading from a script. It is recognizing your voice, interpreting your question, retrieving the right information, and responding in its own words, all driven by AI.
2. How AI “Learns” — The Power of Data and Algorithms
Computers learn through what is called machine learning: they are fed vast amounts of data and infer patterns from it.
- Machine Learning (ML): The machine learns by example, not by rule. Show it a thousand images of dogs and cats, and it can learn to tell them apart without being explicitly programmed to do so.
- Deep Learning: A newer generation of ML built on neural networks, stacked layers of simple computations loosely inspired by how the brain works.
That’s how machines can now identify faces, translate text, or compose music.
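To make "learning by example" concrete, here is a minimal sketch using scikit-learn with made-up numeric features standing in for real images; the feature values and labels are invented purely for illustration.

```python
from sklearn.linear_model import LogisticRegression  # requires scikit-learn

# Each toy "animal" is two invented features: [weight_kg, ear_length_cm]
examples = [[30, 10], [25, 9], [35, 11], [4, 6], [5, 7], [3, 5]]
labels = ["dog", "dog", "dog", "cat", "cat", "cat"]

# Nobody writes an explicit "dog rule"; the model infers the boundary from the examples.
model = LogisticRegression(max_iter=1000)
model.fit(examples, labels)

print(model.predict([[28, 10], [4, 6]]))  # expected: ['dog' 'cat']
```

The point is that the rule separating dogs from cats is never written by hand; it is learned from the labelled examples.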
3. Examples of AI in Your Daily Life
You probably interact with AI dozens of times a day — maybe without even realizing it.
- Your phone: Face ID, voice assistants, and autocorrect.
- Streaming: Netflix or Spotify recommending something you might like.
- Shopping: Amazon’s “Recommended for you” page.
- Health care: AI systems that help flag diseases in X-rays, sometimes faster than human review alone.
- Cars: Self-driving vehicles with sensors and AI delivering split-second decisions.
AI isn’t science fiction anymore — it’s present in our reality.
4. Types of AI
AI isn’t one entity — there are levels:
- Narrow AI (Weak AI): Designed to perform a single kind of task, like ChatGPT answering questions or Google Maps planning routes.
- General AI (Strong AI): A hypothetical kind that could understand and reason across many domains the way an average human can; it has not yet been achieved.
- Superintelligent AI: A level beyond human intelligence; still hypothetical, though a staple of the movies.
Almost everything we have today is Narrow AI, yet it is already incredibly powerful.
5. The Human Side — Pros and Cons
AI is full of promise, and it also forces us to do some hard thinking.
Advantages:
- Smart healthcare diagnosis
- Personalized learning
- Weather prediction and disaster simulations
- Faster science and technology innovation
Disadvantages:
- Bias: AI can make biased decisions if it is trained on biased data.
- Job loss: Automation will displace some jobs, especially repetitive ones.
- Privacy: AI systems gather huge amounts of personal data.
- Ethics: Who would be liable if an AI erred — the maker, the user, or the machine?
The emergence of AI presses us to redefine what it means to be human in an intelligent machine-shared world.
6. The Future of AI — Collaboration, Not Competition
The future of AI is not one of machines becoming human, but humans and AI cooperating. Consider physicians making diagnoses earlier with AI technology, educators adapting lessons to each student, or cities becoming intelligent and green with AI planning.
AI will progress, yet it will never cease needing human imagination, empathy, and morals to steer it.
Last Thought
Artificial Intelligence is not just a technology; it is a reflection of our drive to understand intelligence itself, an attempt to project our minds beyond biology. The more we advance in AI, the more the question shifts from “What can AI do?” to “How do we use it well to empower everyone?”
1. Start with the Problem — Not the Model
Define what you actually need before you even look at models.
Ask yourself:
- What am I trying to do: classify, predict, generate content, recommend, or reason?
- What are the inputs and outputs: text, images, numbers, sound, or more than one (multimodal)?
For example: “summarize customer emails” is a text-generation task, while “spot defective parts in photos” is image classification.
Once you know the task type, you have already done half the job.
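To make this concrete, here is one way to write the answers down before comparing models; the `TaskSpec` fields below are an illustrative sketch, not a standard format.

```python
from dataclasses import dataclass

@dataclass
class TaskSpec:
    goal: str          # classify, predict, generate, recommend, or reason
    inputs: list[str]  # e.g. ["text"], ["image"], or ["text", "image"]
    outputs: list[str]
    quality_bar: str   # how good is "good enough"?

spec = TaskSpec(goal="generate",
                inputs=["text"],
                outputs=["text"],
                quality_bar="a helpful draft that a human will review")
print(spec)
```

Writing the spec down first keeps the later trade-offs (cost, latency, privacy) anchored to what the task actually requires.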
2. Match the Model Type to the Task
With this information, you can narrow it down:
| Task Type | Model Family | Example Models |
| --- | --- | --- |
| Text generation / summarization | Large Language Models (LLMs) | GPT-4, Claude 3, Gemini 1.5 |
| Image generation | Diffusion / Transformer-based | DALL-E 3, Stable Diffusion, Midjourney |
| Speech to text | ASR (Automatic Speech Recognition) | Whisper, Deepgram |
| Text to speech | TTS (Text-to-Speech) | ElevenLabs, Play.ht |
| Image recognition | CNNs / Vision Transformers | EfficientNet, ResNet, ViT |
| Multi-modal reasoning | Unified multimodal transformers | GPT-4o, Gemini 1.5 Pro |
| Recommendation / personalization | Collaborative filtering, Graph Neural Nets | DeepFM, GraphSAGE |
If your app combines modalities (like text + image), multimodal models are the way to go.
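As a rough sketch, the table above can be turned into a simple lookup that shortlists candidates by task type; the mapping and model names below are illustrative, not exhaustive or an endorsement.

```python
CANDIDATES = {
    "text_generation": ["GPT-4", "Claude 3", "Gemini 1.5"],
    "image_generation": ["DALL-E 3", "Stable Diffusion", "Midjourney"],
    "speech_to_text": ["Whisper", "Deepgram"],
    "text_to_speech": ["ElevenLabs", "Play.ht"],
    "image_recognition": ["EfficientNet", "ResNet", "ViT"],
    "multimodal_reasoning": ["GPT-4o", "Gemini 1.5 Pro"],
    "recommendation": ["DeepFM", "GraphSAGE"],
}

def shortlist(task_type: str) -> list[str]:
    """Return candidate models for a task type, or raise if the task is unknown."""
    try:
        return CANDIDATES[task_type]
    except KeyError:
        raise ValueError(f"Unknown task type: {task_type!r}") from None

print(shortlist("speech_to_text"))  # ['Whisper', 'Deepgram']
```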
3. Consider Scale, Cost, and Latency
Not every problem requires a 500-billion-parameter model.
Ask:
- How many requests per day do you expect, and what latency can users tolerate?
- What is your budget per request, and can you realistically host a large model?
- Does the task truly need frontier-level quality, or would a smaller model do?
Example: a simple sentiment classifier does not need a frontier LLM; a compact fine-tuned model will be cheaper and faster.
The rule of thumb: pick the smallest model that meets your quality bar, and scale up only when testing shows you need to.
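For the cost question, a back-of-envelope calculation is usually enough; the token counts and per-1K-token prices in the sketch below are placeholder assumptions, so substitute your provider's real rates.

```python
def monthly_cost(requests_per_day: float,
                 input_tokens: float,
                 output_tokens: float,
                 price_in_per_1k: float,
                 price_out_per_1k: float) -> float:
    """Rough monthly bill for an LLM API at the given usage and token prices."""
    per_request = (input_tokens / 1000) * price_in_per_1k \
                  + (output_tokens / 1000) * price_out_per_1k
    return per_request * requests_per_day * 30

# Hypothetical numbers: 10,000 requests/day, 800 input + 300 output tokens each,
# $0.01 / $0.03 per 1K input/output tokens.
print(f"${monthly_cost(10_000, 800, 300, 0.01, 0.03):,.2f} per month")  # $5,100.00 per month
```

Running the same arithmetic for two or three candidate models often settles the "big vs. small" question faster than any benchmark.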
4. Evaluate Data Privacy and Deployment Needs
If your business must meet ABDM, HIPAA, or GDPR requirements, favour deployments that keep data under your control: self-hosting open models, or using API providers that offer data-processing agreements and no-retention guarantees.
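One common pattern is to route regulated data to infrastructure you control. The sketch below assumes a self-hosted, OpenAI-compatible server (for example vLLM or Ollama) at a hypothetical internal URL, accessed through the `openai` Python package (v1+); the model name is also hypothetical.

```python
from openai import OpenAI  # requires the openai package, v1 or later

def client_for(contains_phi: bool) -> OpenAI:
    """Route sensitive requests to a self-hosted endpoint, the rest to a hosted API."""
    if contains_phi:
        # Data stays on your network; the key is whatever your own server expects.
        return OpenAI(base_url="http://llm.internal.example:8000/v1", api_key="not-needed")
    return OpenAI()  # reads OPENAI_API_KEY from the environment

client = client_for(contains_phi=True)
reply = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # hypothetical name of the self-hosted model
    messages=[{"role": "user", "content": "Summarize this visit note ..."}],
)
print(reply.choices[0].message.content)
```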
5. Verify on Actual Data
A model’s benchmark score does not guarantee it will perform best on your data.
Always test it on a small pilot dataset or pilot task first.
Measure: accuracy on your own examples, latency, cost per request, and the kinds of errors it makes.
Sometimes a small fine-tuned model beats a giant general-purpose one because it “knows your data better.”
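A pilot can be as simple as the sketch below: run each candidate over a small labelled sample of your own data and compare accuracy and latency. `call_model` is a placeholder for however you invoke each candidate, and exact string matching is just one possible scoring rule.

```python
import time

def evaluate(call_model, samples):
    """samples: list of (input_text, expected_output) pairs drawn from your own data."""
    correct, latencies = 0, []
    for text, expected in samples:
        start = time.perf_counter()
        prediction = call_model(text)
        latencies.append(time.perf_counter() - start)
        correct += prediction.strip().lower() == expected.strip().lower()
    return {"accuracy": correct / len(samples),
            "avg_latency_s": sum(latencies) / len(latencies)}

# Usage sketch: results = {name: evaluate(fn, pilot_set) for name, fn in candidates.items()}
```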
6. Contrast “Reasoning Depth” with “Knowledge Breadth”
Some models are great reasoners (they can perform deep logic chains), while others are good knowledge retrievers (they recall facts quickly).
For example:
- If your task requires step-by-step reasoning (such as medical diagnosis or legal analysis), favour reasoning-focused models.
- If it is mostly about retrieving information quickly, smaller retrieval-augmented models may be the better option.
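If you serve both kinds of queries, a simple router can send each request to the right model. The keyword heuristic and model names below are purely illustrative assumptions; production routers usually use a trained classifier instead.

```python
REASONING_HINTS = ("diagnose", "prove", "step by step", "why does", "compare the")

def pick_model(query: str) -> str:
    if any(hint in query.lower() for hint in REASONING_HINTS):
        return "reasoning-model"        # deeper logic chains, slower, pricier
    return "retrieval-augmented-model"  # fast factual lookup over your documents

print(pick_model("Diagnose why this contract clause conflicts with GDPR"))  # reasoning-model
print(pick_model("What is the notice period in our standard contract?"))    # retrieval-augmented-model
```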
7. Think Integration & Tooling
Your chosen model will have to integrate with your tech stack.
Ask:
- Does it offer a stable API or SDK for your stack (Python, JavaScript, etc.)?
- What are its rate limits, uptime guarantees, and pricing tiers?
- Can it run where you need it: cloud, on-premises, or on-device?
If you plan to deploy AI-driven workflows or microservices, choose models that are API-friendly, reliable, and provide consistent availability.
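A practical way to stay API-friendly is to hide the model behind a thin interface of your own, so vendors can be swapped without touching application code. The class and method names below are illustrative, not from any particular SDK.

```python
from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in implementation; replace with a wrapper around your chosen API."""
    def complete(self, prompt: str) -> str:
        return f"(model output for: {prompt!r})"

def answer_ticket(model: TextModel, ticket: str) -> str:
    # Application code depends only on the interface, so the model can be swapped.
    return model.complete(f"Draft a polite reply to this ticket:\n{ticket}")

print(answer_ticket(EchoModel(), "My order arrived damaged."))
```

This kind of seam also makes the next step, re-testing new models as they appear, far less disruptive.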
8. Try and Refine
No choice is irreversible. The AI landscape evolves rapidly — every month, there are new models.
A good practice is to:
- hide the model behind a thin interface so it can be swapped easily,
- keep a small evaluation set built from your own data, and
- re-test promising new models against it every few months.
In Short: Selecting the Right Model Is Selecting the Right Tool
Choosing the right model is a question of technical fit, pragmatism, and ethics.
Don’t go for the biggest model; go for the most stable, economical, and appropriate one for your application.
“A great AI product is not about leveraging the latest model — it’s about making the best decision with the model that works for your users, your data, and your purpose.”