tablestakes.net
Thursday, February 12, 2026

Response to Matt Shumer's 'Something Big Is Happening'

The following essay is a critical review of a viral new article: https://shumer.dev/something-big-is-happening

Executive Summary

AI advancement is real, but it is overstated by industry insiders with direct economic incentives to maximize hype. The article under review exemplifies this troubling pattern: dramatic claims about imminent capability breakthroughs, supported by cherry-picked anecdotes rather than rigorous evidence, wrapped in urgent calls to action that conveniently benefit AI companies' revenue models.

As a recovering investment banker and developing engineer, I have some perspective to share on this article. I have practical experience using AI tools professionally across finance and software development, and formal training in AI/ML from my undergraduate Industrial Engineering degree. However, I want to be transparent about the limits of my technical expertise: my academic background predates the transformer revolution and modern scaling paradigms. Since 2020, the field has seen fundamental shifts—transformer architectures, large-scale pre-training, reinforcement learning from human feedback (RLHF), retrieval-augmented generation (RAG), emergent capabilities from scale, and multimodal models. I don't have hands-on experience building these systems or working in AI research labs. This positions me somewhere between the true experts advancing the field and the general audience Matt is writing for.

My primary critique is that Matt's article is little more than thoughtless sensationalism. To his credit, it's readable and went viral. But people curious about AI deserve better than hype dressed as insight. What follows is my attempt to fix his article. I've added the critical context he omitted, identified the logical gaps he papered over, and noted where evidence should exist but doesn't. I'll use the framework from my philosophy courses: identifying the premises Matt proposes, then examining whether his conclusions actually follow. Below, I've categorized his article so readers can judge whether I've represented his reasoning fairly.

Conflict of Interest

Let's establish some context about our author. The author self-describes as "founder & CEO at OthersideAI (HyperWrite). I build and invest in AI products, ship fast, and share what I learn with a large audience." This isn't a researcher publishing peer-reviewed work or a journalist attempting balanced reporting... this is an industry participant with financial interests in AI adoption.

For additional perspectives, consider the evolving views of Sam Altman, Dario Amodei, Yann LeCun, Alexandr Wang, Geoffrey Hinton, Demis Hassabis, Andrej Karpathy, Fei-Fei Li, and Andrew Ng. Note how their public statements have shifted alongside their stations in life and their roles at firms, universities, and labs. None is without bias, but there is more variance in the views of leading AI researchers these days than this article accounts for.

Pulling Out of the Spin

Let me be clear about what the article does promote:

  1. Subscribe to paid versions of Claude or ChatGPT
  2. Use the most expensive models (maximizing token usage and overage costs)
  3. Upload sensitive workplace data to third-party AI services
  4. Work harder using AI tools to outcompete peers
  5. Accept that dramatic AI-driven change is inevitable

It is not educational content; it is part of the sales funnel. The author positions himself as an experienced insider helping the uninformed masses, then delivers recommendations that benefit AI companies' bottom lines. It seems to me that he is taking advantage of a non-technical audience.

While I agree that using a paid LLM to ask big questions can be a productive learning or creation environment, the article never addresses data privacy and encourages resignation to the power of big AI. Readers are taught almost nothing that could help them protect themselves from manipulative AI sales pitches. Below, I critique the article line by line, so that a reader can weigh multiple perspectives and benefit from the bull/bear contrast between Matt's hype view and my more critical one.

Argument Summary & Critique

P1: COVID took most of us by surprise in Feb 2020.

P2: Over the course of 3 weeks, the entire world changed.

C1: Therefore, we're similarly in a pre-catalyst phase with respect to AI.

Critique: Why is COVID the right template here? Unlike COVID, we can stop the spread of AI by turning it off if we don't like its impact, rather than having to create a cure. The COVID framing specifically triggers a fear-based response to sudden change.

P3: Author is very experienced in AI startups and investing.

P4: Author is writing this to educate non-technical people around him.

P5: Author fears that the disruption will be much more severe than people anticipate.

C2: Therefore, Author can help people deal with disruption by sharing some advice and perspective.

Critique: This establishes ethos without examining incentives. Someone with AI startup investments benefits from widespread AI adoption, regardless of whether that adoption is economically rational for the adopters.

P6: The future is being shaped by small number of researchers at handful of companies, author is not one.

Critique: This dramatically understates the importance of implementation. Yes, researchers train the models and design architectures. But the UI layer, workflow integration, and product design choices often matter more than marginal improvements in base model capabilities. LLMs may commoditize, making the most interesting competitive differences entirely separate from model weights. The researchers aren't the whole story.

P7: GPT-5.3 Codex and Opus 4.6 have judgement and taste, unlike prior models.

Critique: This is where things fall apart. "Judgement" is not a technical property that can be objectively verified. It's a subjective impression from cherry-picked interactions. Searle's Chinese Room argument rebutted this kind of anthropomorphic attribution decades ago.

Can we test whether an LLM has "good judgement" across all possible prompts? No, neither in practice nor in principle. The stochastic nature of text generation means identical prompts can yield different outputs. Furthermore, there is likely so much variance in typical user input that minute differences in the initial prompt can bias the LLM towards or away from certain words or phrases, imputing a character of judgement into the response text that is purely incidental.

Perhaps when I ask GPT "how to make a website" it gives a less mature response, with seemingly less taste, than when I ask "how to create a professional website efficiently," but that does not mean ChatGPT has taste. Each prompt activates different nodes, producing responses that appear to reflect different judgement but really just carry patterns from the training data through into the response.

There is a risk in relying on the appearance of a quality like judgement or taste, rather than carefully measuring quality, consistency, and reliability. Given the stochastic nature of these systems, we must test further before drawing conclusions from small sample sizes.
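The stochasticity point is easy to demonstrate in miniature. Below is a toy sketch of temperature-based sampling from a softmax distribution (not any vendor's actual decoder): identical "prompts" (identical logits) produce varying outputs across runs, which is why a handful of impressive responses cannot establish a stable property like "judgement."

```python
import math
import random

def sample_next_token(logits, temperature=0.8, seed=None):
    """Sample one token index from a softmax over logits.

    Toy illustration of stochastic decoding: the same inputs can
    yield different outputs on repeated calls.
    """
    rng = random.Random(seed)
    # Scale logits by temperature, then apply a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the resulting categorical distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Same "prompt" (same logits), sampled repeatedly: the outputs vary.
logits = [2.0, 1.5, 0.3]
draws = [sample_next_token(logits) for _ in range(1000)]
```

With 1,000 draws from the same distribution, more than one token index will almost surely appear, which is exactly why a small-n sample of responses is a weak basis for attributing taste.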

P8: GPT-5.3 Codex replaced the author for 4 hours without correction or intervention based on a single prompt.

Critique: I have some questions...

  • How long was spent writing the prompt?
  • How much prior knowledge was required to write an effective prompt?
  • Can human intervention be removed entirely while maintaining quality?
  • How do these capabilities scale with additional prompts and the size of the context window?
  • How much precision is required to manage the context window and prompts to focus the AI on specific areas, versus a one-shot fix over the whole codebase without a user spec?

C3: Therefore, GPT-5.3 is better than humans for the role of “software engineer”.

Critique: From my experience building a chatbot using Python, WebUI, and VLLM, LLMs provide minimal capability out of the box. They don't even differentiate between prompted text and generated text without manual intervention. I had to explicitly label user prompts and assistant outputs to achieve basic chatbot functionality.

Much of the additional reasoning capability comes from further layers of statistical pattern recognition applied on top of the word-prediction modeling the LLM performs, associating words into groups to simulate concepts. Alternatively, deterministic application-specific logic is wrapped around the LLM, for example using deterministic rules to find a cell in a spreadsheet, then using the LLM to populate its contents.
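The spreadsheet example can be sketched as a hybrid pipeline, with the LLM stubbed out as any callable (the function names and the stub are my own illustration, not a real product's API): deterministic code decides *where* to act, and the model only generates *what* to write.

```python
def find_target_cell(rows, header):
    """Deterministic step: locate the first empty cell in the column
    whose header matches. No LLM involved here."""
    col = rows[0].index(header)
    for r, row in enumerate(rows[1:], start=1):
        if row[col] == "":
            return r, col
    return None

def fill_cell(rows, header, llm):
    """Hybrid step: a deterministic rule chooses the cell, and the LLM
    (any callable taking a prompt string) generates its contents."""
    pos = find_target_cell(rows, header)
    if pos is None:
        return rows  # nothing to fill
    r, c = pos
    # Give the model the rest of the row as context for generation.
    context = dict(zip(rows[0], rows[r]))
    rows[r][c] = llm(f"Fill in '{header}' given {context}")
    return rows

# Stubbed "LLM" for illustration; a real call would hit a model API.
sheet = [["Company", "Summary"], ["Acme", ""]]
filled = fill_cell(sheet, "Summary", llm=lambda p: "[generated text]")
```

The division of labor matters: the reliability of the overall system comes largely from the deterministic scaffolding, not from the model alone.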

A research puzzle in early 2026 appears to be how to implement these LLMs into many traditional roles. While I agree that there are many areas where AI tools can add value to a human professional with years of experience performing a task, I believe the challenge of removing the human entirely is of a different kind. Similar to a traditional software automation, there has to be a large amount of work up front to determine the requirements of a system and then there is work to implement it. AI may help with the design and implementation, but this build phase requires human judgement to assess if it meets the bar of performance to go live and maintain.

It’s still very early to know how that will play out.

C4: Therefore, AI labs will surpass human quality for all jobs, likely within 1 to 5 years.

Critique: This is an extraordinary claim requiring extraordinary evidence.

P9: AI is much better than it was in 2023 or 2024.

C5: Therefore, AI has not hit a limit.

Critique: The author completely ignores the scaling limits debate. Some researchers believe we've hit diminishing returns on parameter increases. Even if true, engineering improvements (like those I described above) can still yield better products.

C5a: Additionally, any debate that AI has hit a limit is over and anyone making that argument has an incentive to downplay what is happening.

Critique: I believe Matt is saying that anyone who disagrees with him is in bad faith, which is itself a statement of bad faith.

Of course AI has limits. It requires physical hardware, power, cooling, and obeys the laws of physics. The number of atoms in the universe is finite. Arguing otherwise is absurd. The author likely means "we can expect substantial continued improvement," but the framing serves to shut down opposition and makes collateral damage of the truth.

P10: The gap between current model capabilities and public understanding is growing, which is dangerous insofar as it prevents people from preparing.

Critique: This is a classic Silicon Valley narrative. Firms like Facebook, Amazon, Microsoft, and Google said the same thing about learning to code. Tech companies hired armies of marketers to evangelize that kids who couldn't code would be "left behind."

What happened? Companies got a talent glut, used oversupply to hire at lower costs, and withheld training by forcing candidates to compete on project experience (de facto free training).

We know what's worth our time better than tech companies do. In fact, they're the absolute worst at giving advice on what to do next, and by design: their advice helps them and is at best indifferent, at worst adversarial, to regular people's interests.

P11: This gap persists because too many users are only using free AI tools, which are a year behind paid tools.

Critique: I agree people should try paid tools, but I also think users should stretch their AI dollar. As of this writing, it is free to prompt ChatGPT from a private browser. At $20/month for a paid GPT or Claude account with current usage limits, VCs are subsidizing prompts: some of my prompts probably cost more than $20 in inference alone, with runtime under a day, let alone a month. We're in a temporary period where economics favor users as platforms seek to attract them, but I doubt that would continue if a critical mass of firms onboarded these AI platforms.

P12: The people paying for the best tools know what is coming.

P13: One lawyer the author knows is using a paid AI tool; it works and is improving every couple of months.

Critique: Anecdotal evidence. No number of lawyer testimonials can form the basis for the six-sigma quality guarantees that enterprise customers will expect. There's a fundamental issue: LLM output is stochastic, and natural-language meaning is not formally fixed (cf. Gödel's incompleteness theorems on the limits of formal systems). It may not be possible to guarantee a priori reliability for a natural-language-generating agent without serious assumptions or scope limitations. And at that point, aren't we just recreating programming languages and compilers?

C6: Therefore, in the lawyer's view, the AI will soon take over his Managing Partner work.

Critique: A lawyer saying AI might take his job doesn't entail that AI can successfully perform that job. Even lawyers disagree about which lawyers are worthy of their positions. Surely they'll disagree about whether an AI is qualified. Finding 1 out of 10 dentists who recommend AI is vulnerable to sampling bias. I would be more convinced by a legal industry survey.

P14: From 2023 to 2026, AI has made tremendous progress and can pass bar exams and more.

Critique: Bar exam success is less impressive than it sounds. If you train an LLM on questions plus answers, of course it passes. Even with changed questions, an LLM with sufficient training data finding general solutions isn't surprising, particularly since law, like code, is more formalized than natural English.

P15: METR has not yet evaluated the latest tool, but it is expected to surpass a 4-month time horizon on tasks.

P16: Amodei claims AI that is better than all humans at most tasks is coming in 2026 or 2027.

Critique: Says the guy talking his book. He also says things like "AI could destroy the world and that's why we have to build it." The doom and gloom is part of the sales pitch. Create existential stakes, position yourself as the responsible builder, then ask for billions in funding and regulatory capture that conveniently excludes competitors.

P17: GPT-5.3 Codex was developed with the team using its own pre-release versions to aid them with parts of the workflow.

C7: AI is now intelligent enough to contribute to its own improvement.

Critique: This framing is deliberately misleading. The author makes it sound like GPT examined its own code and autonomously improved itself. What actually happened? Developers used it with specific prompts and context to make specific changes with human-set goals and human review. These humans were assisted by the AI tool as they made changes to its underlying codebase. That's not the same as AI writing itself.

P18: AI Is different than previous waves of automation in that it can replace (white collar knowledge) workers generally, not just specific tasks.

Critique: Maybe. But I'd argue that AI, while mimicking human language patterns including emotion, doesn't actually experience emotions. These models don't use hormones to modulate weights or yield outputs attenuated by happiness, sadness, or anger. They produce the next word.

White-collar knowledge workers use more than analytical skills. The best salespeople leverage desire, fear, disgust, envy, rapport, and humor to build relationships. The pace of relationship development is mediated by emotions processed by both parties. We're incredibly far from simulating the complex emotional responses our species orchestrates social games around, and few researchers are even attempting it.

Here's a secret AI enthusiasts don't want to share: neural network architecture isn't actually that similar to the human brain. Yes, it's more brain-inspired than a CPU. Many computing developments have made computers more brain-like. Turing, Von Neumann, and Hinton were all brain-inspired in part.

But Hinton really hung his hat on this, arguing in the 1980s that neural networks paralleled human brain structure. We know this parallel breaks down. Computers work with bits instantiated as on/off electrical signals. The brain also uses electricity but adds chemical and hormonal changes. A neural network node emits a single scalar activation (classically squashed into [0,1] by a sigmoid, though modern activations relax this). Human synapses carry multiple signal types and receptors, processing far more information than a single node. The human brain has vastly more compute (though not all simultaneously available) than any computer, and operates far more energy-efficiently.
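The contrast is easiest to see from the definition itself. Here is a minimal sketch of a single classic artificial "neuron" (my own toy example, not any particular architecture): a weighted sum squashed through a sigmoid, emitting one scalar, versus a biological synapse's many electrical, chemical, and hormonal signaling channels.

```python
import math

def neuron(inputs, weights, bias):
    """A single classic artificial neuron: weighted sum plus bias,
    squashed into (0, 1) by a sigmoid. One scalar in, one scalar out,
    unlike a biological synapse's many signaling channels."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Two inputs, two weights, one bias -> one activation in (0, 1).
out = neuron([0.5, -1.0], [2.0, 0.4], bias=0.1)
```

However many of these units you stack, each one is still a simple scalar function; the brain analogy is a loose inspiration, not a structural equivalence.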

P19: A long list of jobs will be replaced and there is no obvious area to retrain for because AI can learn that too.

C9: Any job that can be done on the computer is a target for replacement by AI and already is in process.

Critique: More doom and gloom, but with a logical hole. If AI is taking over, wouldn't learning to help companies use AI be valuable? Will CEOs handle AI integration personally? It seems natural to assume that managers will always want to delegate responsibilities. Even if AI creates net job losses, understanding how AI works seems like one potentially useful skill in a post-AI world, although the author instead recommends getting hands-on experience using LLMs rather than learning the fundamental theory underpinning the technology.

That said, I agree that computer-based jobs like data entry, where the entirety or vast majority of the work is computer work not evaluated by domain-expert human judges or clients, are automation targets. Companies like Mercor are working on this. It requires enormous upfront investment, but it's clearly underway for call centers and car drivers. Maybe it'll work on bankers and lawyers; there is an awful lot of grunt work that can be automated, but it's not clear the end-to-end, relationship-oriented client delivery model can be so easily displaced.

P20: Author provides a list of recommendations how best to adapt to new AI era.

P21: AI is the biggest national security threat ever.

P22: AI could have great upside if we can cure cancer, cheat death, etc.

P23: AI could be terrible if we get it wrong.

P24: The richest institutions are committing trillions to AI.

C10: AI is coming to our world quickly, and it's already here.

In Conclusion

Since much of this article amounts to scaremongering folks into learning AI to avoid losing their jobs to it, let me wrap up this review with an alternative view of the reasons behind layoffs. Based on work I did as an investment banker analyzing tech layoff announcements in 2024, I can provide some grounding in reality. Here are the actual drivers of recent workforce reductions.

Primary Factors for 2025 Layoffs:

  • 2021 Overhiring: Many companies massively overextended during the pandemic boom. They churn their 10% dutifully, as Jack Welch would have, but additional layoffs were needed to rationalize headcount to pre-pandemic levels
  • Economic Slowdown: Outside AI and healthcare, growth has decelerated significantly. This has been masked by confounding factors in geopolitics, macro conditions, and even some novel market trends
  • Tariff Policy and Geopolitical Uncertainty: Creating planning challenges and margin pressure

The AI Attribution Game:

Many American tech companies claim on earnings calls that layoffs stem from AI or automation. But it is hard to attribute replaced jobs to the end-to-end AI tools actually in market. Furthermore, there is a bullish market response to AI-based layoff announcements, and CEOs know it.

Layoffs typically signal cost reduction (earnings, margin expansion), while AI attribution adds an innovation narrative (growth and further operating leverage). Compared with longer-term historical patterns, it has been unusual that layoff announcements over the past two years have resulted in higher rather than lower share prices.

My belief: the market has not yet efficiently processed these announcements. Investors have held off doing the work to determine whether companies are cutting muscle or fat. The immediate benefit of cost cuts often gets offset by longer-term revenue growth deterioration, but that takes time to show up in 8-Ks and for the market to piece together. The market currently reflects a belief that offshoring, AI, and/or further leveraging existing employees will replace these lost roles, not that there will be an AI jobpocalypse.

AI Impact On Hiring:

There is some helpful data here from ADP. The punchline: industries whose employees are most exposed to AI, like software engineering and customer service, are more likely to see AI-driven hiring slowdowns: https://www.adpresearch.com/yes-ai-is-affecting-employment-heres-the-data/