Day 205 of 1000: Artificial intelligence opportunities in 2026

I’m undertaking a 1000-day reinvention project, blogging here daily to track my progress. In Monday Musings, I write freely and wanderingly about some topic that’s on my mind.

In his 2025 LLM Year in Review, Andrej Karpathy writes of the emergence of a new layer of LLM apps, exemplified by Cursor:

3. Cursor / new layer of LLM apps

What I find most notable about Cursor (other than its meteoric rise this year) is that it convincingly revealed a new layer of an “LLM app” – people started to talk about “Cursor for X”. As I highlighted in my Y Combinator talk this year (transcript and video), LLM apps like Cursor bundle and orchestrate LLM calls for specific verticals:

  1. They do the “context engineering”
  2. They orchestrate multiple LLM calls under the hood strung into increasingly more complex DAGs, carefully balancing performance and cost tradeoffs.
  3. They provide an application-specific GUI for the human in the loop
  4. They offer an “autonomy slider”

A lot of chatter has been spent in 2025 on how “thick” this new app layer is. Will the LLM labs capture all applications or are there green pastures for LLM apps? Personally I suspect that LLM labs will trend to graduate the generally capable college student, but LLM apps will organize, finetune and actually animate teams of them into deployed professionals in specific verticals by supplying private data, sensors and actuators and feedback loops.

I haven’t used Cursor myself, but I understand it is an AI-powered code editor built as a fork of Microsoft’s Visual Studio Code integrated development environment (IDE).

Based on my experience using ChatGPT and Gemini for a variety of tasks, including writing blog posts and newsletter articles, editing book chapters, planning my day, and doing Tarot readings, I can see how useful this would be. The chat interface is engaging and easy to use, but it doesn’t lend itself to efficiently processing structured information. As Karpathy notes later in the blog post:

“chatting” with LLMs is a bit like issuing commands to a computer console in the 1980s. Text is the raw/favored data representation for computers (and LLMs), but it is not the favored format for people, especially at the input. People actually dislike reading text – it is slow and effortful. Instead, people love to consume information visually and spatially and this is why the GUI has been invented in traditional computing.

There are a number of ways the current interface slows me down:

  • It’s difficult to find information I’ve already generated amid the verbose output of ChatGPT and Gemini
  • I have to tell the chatbot the same thing multiple times (I could save these as system instructions, but I rarely do)
  • Generated information isn’t always structured in useful ways, such as a table or a checkbox list

An app that wraps around an LLM for each basic task I do would be very helpful to me, if it worked well.
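To make that concrete, here is a toy sketch of what such a wrapper might look like for one of my tasks, following the ingredients Karpathy lists: fixed instructions folded into the context, more than one model call chained together, and structured output instead of a wall of chat text. Everything here is hypothetical; `call_llm` is a canned stand-in stub, where a real app would call a model API.

```python
def call_llm(prompt: str) -> str:
    """Stand-in stub for a real model call; returns canned replies."""
    if "Draw" in prompt:
        return "The Fool, The Magician, The Star"
    return "Beginnings; skill; hope"


def tarot_reading(question: str) -> dict:
    # Context engineering: standing instructions live in the app,
    # so I don't have to retype them every session.
    system = "You are a concise Tarot reader. Answer in short phrases."

    # First model call: draw the cards.
    draw = call_llm(f"{system}\nDraw three cards for: {question}")
    cards = [c.strip() for c in draw.split(",")]

    # Second call in the chain, fed the first call's output.
    meanings = call_llm(f"{system}\nInterpret these three cards: {draw}")

    # Structured output a GUI could render as a table or checklist,
    # rather than verbose chat text I have to dig through later.
    return {
        "question": question,
        "cards": cards,
        "meanings": [m.strip() for m in meanings.split(";")],
    }


reading = tarot_reading("What should I focus on today?")
print(reading["cards"])  # ['The Fool', 'The Magician', 'The Star']
```

The point isn’t the Tarot logic; it’s that even a tiny wrapper fixes all three of my complaints above, which is presumably why the “Cursor for X” pattern is spreading.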


Perhaps I could vibe code such a thing myself? Create an interactive Tarot reader? Develop a “next right thing” daily planning tool? Build a book reviewer app that will do a development edit?

Karpathy writes:

2025 is the year that AI crossed a capability threshold necessary to build all kinds of impressive programs simply via English, forgetting that the code even exists. Amusingly, I coined the term “vibe coding” in this shower of thoughts tweet totally oblivious to how far it would go :). With vibe coding, programming is not strictly reserved for highly trained professionals, it is something anyone can do.

I haven’t tried this yet but maybe I will in 2026. Maybe 2026 is the year I dive back into AI, not as an AI/ML engineer again, but as a power AI user. Last week I shared how I’m going to write for AI, as a way of communicating with humans. Perhaps that’s not the only AI exploration I’ll do.
