What Can AI Actually Do Now That It Couldn't Do Two Years Ago?

Nicole Foster
Mar 13, 2026

AI used to feel like a search engine with better grammar. Now it writes code, builds applications, reasons through legal documents, and generates videos from a sentence. If you haven't checked in for a while, the gap between what you remember and what's real today might catch you off guard.

The Quiet Shift Nobody Talks About Enough

Most coverage of AI focuses on product launches and benchmarks. But the real change has been quieter and more fundamental: AI stopped being a tool you go to and became something embedded in the background of work.

Two years ago, you opened ChatGPT, typed a question, got an answer, and closed the tab. Now AI sits inside your code editor, your design app, your spreadsheet, your customer support queue. It reads your Slack threads and suggests follow-ups. It watches your database and flags anomalies before you check your dashboard. The interaction model flipped from pull to push — you don't always ask AI for help anymore. Sometimes it just shows up.

This shift matters because it changes who benefits from AI. It used to favor people who were good at prompting. Now it favors teams that know how to integrate AI into workflows where it runs quietly, doing useful work without anyone explicitly asking. That's a different kind of skill, and a lot of organizations haven't caught up yet.

Reasoning, Not Just Pattern Matching

The biggest technical leap is in reasoning. Early language models were impressive mimics — they could produce text that sounded right without actually understanding the problem. Ask them to plan a multi-step project or debug a subtle logic error, and they'd confidently give you garbage.

Current models are different. Claude, GPT-4, and Gemini can hold context across extremely long conversations, track dependencies in complex documents, and catch contradictions humans miss. A lawyer I spoke with last month said she now runs every contract through Claude before reading it herself — not to replace her judgment, but because the model catches issues she'd typically find on the third pass, not the first.

This improvement came from techniques like reinforcement learning from human feedback (RLHF), chain-of-thought reasoning built into model training, and much larger context windows. The practical result is that AI moved from "useful for first drafts" to "useful for review, analysis, and decision support." That's a meaningful jump because review tasks are where professionals spend most of their time.

Creative AI Got Quietly Good

While the tech world argued about whether AI art counts as real art, the tools just kept improving. Image generation hit a point where the average person can't tell AI-generated photos from real ones. Video generation went from "blurry five-second clips" to coherent scenes with consistent characters, lighting, and motion.

But the more interesting development is how creative professionals actually use these tools. They're not replacing illustrators or filmmakers. They're being used for things nobody had time for before — rapid concept exploration, style testing, mood boarding, storyboard drafts. One creative director described it as "being able to think visually at the speed of conversation." She sketches ideas faster, kills bad ones earlier, and arrives at final concepts with more confidence.

Writers have found a similar rhythm. AI doesn't write their articles for them, but it handles the scaffolding — research summaries, structural outlines, alternative phrasings — so the writer can focus on voice, argument, and nuance. The people doing the best AI-assisted writing treat the model like a research partner, not a ghostwriter. That distinction shapes everything about the quality of the output.

The Open Source Factor Changed the Game

Something that doesn't get enough credit: the open source AI movement fundamentally altered the power dynamics of this industry. Two years ago, if you wanted a capable language model, your realistic options were a handful of closed commercial APIs. You sent your data to someone else's servers, paid their prices, and lived with their terms.

Now Llama, Mistral, Qwen, and dozens of other open models run on personal hardware. A developer with a decent laptop can download a model, customize it for a specific task, and deploy it without ever sending data to a third-party server. That matters enormously for healthcare, legal, finance, and government organizations where data privacy is non-negotiable.

It also matters for innovation speed. When a model is open, thousands of researchers and engineers improve it simultaneously. Fine-tuned variants appear within days of a base model release. Entire communities build specialized tools on top of open foundations. The pace of development in open source AI right now is something the software industry hasn't seen since the early days of Linux.

The competitive pressure from open source also forced commercial providers to improve faster and lower prices. That benefits everyone — whether you use open models or paid platforms, the quality-to-cost ratio keeps getting better.

Multimodal AI Changed What's Possible

The term gets thrown around a lot, but multimodal AI genuinely changes what you can ask a machine to do. Older models worked with text only. Current systems process images, audio, video, code, and documents natively — all within the same conversation.

You can photograph a whiteboard sketch, paste it into a chat, and ask the model to turn it into a structured project plan. You can upload a spreadsheet and a chart and ask "does the data in the spreadsheet support the conclusion this chart implies?" You can feed it a video recording of a user testing your product and get a structured usability report.

These aren't theoretical capabilities — they're things people do every day. The shift from text-only to multimodal is similar to the shift from command-line interfaces to graphical ones. It didn't change what computers could theoretically do, but it changed who could actually use them and for what.

What's Still Broken (And Why It Matters)

It wouldn't be honest to skip the problems. AI still hallucinates — it generates confident statements that are completely wrong. It still struggles with math, precise counting, and tasks that require genuine logical proof rather than pattern-based reasoning. It reflects biases present in its training data, sometimes in subtle ways that are hard to catch.

The reliability gap matters more now than it did two years ago, precisely because more people trust AI with more important tasks. When someone uses AI to draft a legal brief or a medical recommendation, a hallucinated citation or a biased suggestion has real consequences.

The industry is working on this from multiple angles. Retrieval-augmented generation (RAG) connects models to verified data sources instead of relying on memory. Structured output validation catches format errors before they reach production. Human-in-the-loop systems ensure that critical decisions always get a second pair of eyes.
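The RAG pattern mentioned above is simple at its core: fetch relevant passages from a trusted source, then instruct the model to answer only from them. Here's a minimal sketch. The keyword-overlap retriever and the document list are illustrative stand-ins; production systems use vector embeddings and a real model API for the generation step.

```python
# Minimal retrieval-augmented generation (RAG) sketch: instead of letting
# the model answer from memory, retrieve passages from a trusted source
# and ground the prompt in them. The keyword scorer is a toy stand-in for
# real embedding-based retrieval.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble a prompt that constrains the model to retrieved sources."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer using ONLY the sources below. "
        "If they don't contain the answer, say so.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "The refund window is 30 days from the date of purchase.",
    "Support is available Monday through Friday, 9am to 5pm.",
    "All hardware ships with a one-year limited warranty.",
]
prompt = build_prompt("How long is the refund window?", docs)
print(prompt)
```

The "say so if the sources don't contain the answer" instruction is the part doing the anti-hallucination work: it gives the model a sanctioned way out instead of forcing a confident guess.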

But the most important fix is cultural, not technical. Teams that use AI well build habits around verification. They treat AI output as a starting point, not a final answer. The organizations getting burned are usually the ones that automate without auditing.
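The "starting point, not final answer" habit can be partly automated with the structured output validation described earlier: parse what the model returned, check it against your expectations, and route anything suspect to a human. A sketch, assuming a hypothetical contract-review workflow; the field names and JSON string are made up for illustration, not any particular API's schema.

```python
# Sketch of validating a model's structured output before anything
# downstream consumes it. The JSON string stands in for a real model
# response; the schema is a hypothetical contract-review example.
import json

REQUIRED_FIELDS = {"summary": str, "risk_level": str, "citations": list}
ALLOWED_RISK_LEVELS = {"low", "medium", "high"}

def validate_response(raw: str) -> dict:
    """Parse model output and reject anything malformed or out of range."""
    data = json.loads(raw)  # raises an error if the output isn't JSON at all
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"missing or mistyped field: {field}")
    if data["risk_level"] not in ALLOWED_RISK_LEVELS:
        raise ValueError(f"unexpected risk_level: {data['risk_level']}")
    if not data["citations"]:
        raise ValueError("no citations: route to human review")
    return data

model_output = (
    '{"summary": "Contract looks standard.", '
    '"risk_level": "low", "citations": ["clause 4.2"]}'
)
print(validate_response(model_output)["risk_level"])
```

Note what the last check does: an answer with no citations isn't rejected as wrong, it's flagged for human review. That's the human-in-the-loop pattern in miniature.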

Where This Goes Next

Prediction is tricky with AI because the field moves faster than most forecasts. But a few trajectories seem clear.

AI agents — systems that don't just respond to prompts but take independent action across tools and workflows — are the next major frontier. Early versions already book meetings, manage projects, file reports, and coordinate across applications. They're clumsy now, but improving fast. Within a year or two, the distinction between "AI assistant" and "AI coworker" will blur considerably.

Personalized models are another direction worth watching. Instead of one giant model serving everyone the same way, we're moving toward systems that learn individual preferences, writing styles, and decision patterns over time. Your AI assistant in 2027 will probably know your communication style better than most of your colleagues do.

The integration layer is also maturing. Right now, connecting AI to business systems requires meaningful engineering effort. Soon it won't. AI platforms are building native connectors to hundreds of common tools, making it possible for non-technical teams to set up sophisticated AI workflows without writing code.

FAQ

What's the biggest difference between AI now and two years ago?
Reasoning quality and integration depth. Current models understand context, catch contradictions, and connect to external tools and data. They moved from "interesting toy" to "daily work tool" for many professionals.

Can AI replace creative professionals?
Not in a meaningful way. AI accelerates parts of the creative process — brainstorming, concept testing, drafting — but the taste, judgment, and originality still come from humans. The best results come from treating AI as a creative partner, not a replacement.

What are open source AI models and why do they matter?
Models like Llama and Mistral are freely available to download, modify, and run on your own hardware. They give organizations full data control, eliminate ongoing API costs, and drive faster innovation through community contributions.

Is AI reliable enough for important business decisions?
It's reliable as a decision support tool when paired with human review. It's not reliable as a standalone decision maker. Teams that verify AI output consistently get excellent results. Teams that skip verification run into problems.

What should someone learn to stay relevant as AI evolves?
Focus on judgment skills that AI amplifies — critical thinking, problem framing, output evaluation, and workflow design. Technical AI skills help, but knowing how to direct and verify AI matters more for most roles than knowing how to build it.

Similar News