AI Reading for Monday August 18
An MIT report says about 95% of enterprise generative‑AI pilots fail to impact P&L, citing integration gaps and misallocated budgets. - nanda.media.mit.edu

Members of the class of 2026 have had access to AI since they were freshmen. Almost all of them are using it to do their work. - The Atlantic
CEOs want companies to adopt AI, but aren’t very hands-on with it, so they turn to younger employees to lead the way. - The New York Times
This CEO laid off nearly 80% of his staff because they refused to adopt AI fast enough. 2 years later, he says he’d do it again - Yahoo Finance

If AI takes most of our jobs, money as we know it will be over. What then? - The Conversation

Skinner's bomb-guiding pigeons, behaviorism, and reinforcement learning. - MIT Technology Review
If you let a simple reinforcement-learning loop optimize a sufficiently powerful deep network for long enough, with enough data, it learns the dynamics that govern a system like an inverted pendulum. At least, it learns enough of the math to achieve the objective of balancing the pendulum through complex computation and feedback. (Skip to around 5:30 to watch it after it has learned for a while.)
It's learning dynamics, not memorizing. But it's not learning symbolic reasoning the way an engineering student learns from an applied-math textbook. Start it in a weird state it hasn't seen before and it probably fails. This particular robot can't solve generalized systems of differential equations or Kalman filters.
But with enough data and enough compute, a similar system can learn the dynamics of any system, and eventually learn to autopilot anything, like a self-driving car.
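As a concrete illustration of that kind of feedback loop, here is a minimal sketch of REINFORCE, a simple policy-gradient method, learning to balance Gymnasium's CartPole-v1 inverted pendulum purely from reward signals, with no equations of motion supplied. The network size, learning rate, and episode count are illustrative assumptions, not tuned values.

```python
# Minimal REINFORCE sketch: learn to balance an inverted pendulum
# (CartPole-v1) from reward feedback alone. Hyperparameters are
# illustrative assumptions, not tuned values.
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(            # tiny network: 4 observations -> 2 action logits
    nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2)
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated
    # discounted returns, computed backwards from the end of the episode
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    # REINFORCE: raise the log-probability of actions that led to high return
    loss = -(torch.stack(log_probs) * returns).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if episode % 50 == 0:
        print(f"episode {episode}: return {sum(rewards):.0f}")
```

After a few hundred episodes the policy typically balances the pole for the full episode, despite never being shown the system's differential equations.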
So it's a bit of a semantic question whether reasoning models that learn to follow human-like chains of thought are "really" reasoning. They don't reason the way humans do: they are far less data-efficient and can't learn on the fly. But they are moving toward similar goals.
AGI would be able to do those things: learn on the fly in a data-efficient way, so you could explain the rules of Go and it would quickly play passably, as opposed to playing like a master only after training for days or weeks and then being unable to generalize that knowledge to similar games without repeating the whole process.

AI slop pays, so it is flooding video platforms - The Washington Post

Small tamale shop gets 22 million views on a weird AI-generated ad about falling out of an airplane - Business Insider

The AI vs authors results! (part 2). People just can't recognize AI writing consistently. - Mark Lawrence

IVIX, whose software finds tax evasion and helps law enforcement fight financial fraud, raises big money - The Wall Street Journal
DARPA pays out millions for AI that can fix open-source vulnerabilities - Cybersecurity Dive
LLMs + Coding Agents = Security Nightmare - Gary Marcus Substack
Definitely a challenge for security professionals, but AI can help.
The two things that kept me awake as a CTO were someone wiring funds to a casino in the Philippines after getting fooled by a con artist, and ransomware like the Sony hack in 2014.
You have to really double down on staff training, since AI-assisted social engineering is so much better, and you need AI-enabled security operations.
A lot of security folks have a knee-jerk reaction to block anything AI, but that handicaps firms competitively; you have to think through the threat model carefully.
As a working model, I would tend to give AI access to any data you would give your rank-and-file employees. Obviously payroll and other sensitive data need restrictions. But broadly, you pick your AI posture, whether on-prem, an enterprise offering like Amazon Bedrock, or a public cloud service like ChatGPT, then grant access and monitor it.
On the other hand, where agents update systems and files, everything needs structure, security review, and least-privilege access controls. AI guardrails are like policies and procedures: good to have, but no guarantee they will be followed or that someone won't get socially engineered. A recipe for disaster is access to data + untrusted prompts (or autonomy) + access to actions (like email or other exfiltration vectors, or modifying/deleting files); see the sketch below.
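One way to operationalize that rule is simple taint tracking at the tool-dispatch layer: refuse any call that would combine all three ingredients. A minimal sketch, assuming a hypothetical AgentContext, TOOL_CAPS capability table, and guard function (illustrative names, not a real library):

```python
# Sketch of the "recipe for disaster" check: block any agent step that
# combines sensitive-data access, untrusted input, and a consequential
# action. All names here are hypothetical, for illustration only.
from dataclasses import dataclass

# What each tool can do; a real system would derive this from tool metadata.
TOOL_CAPS = {
    "search_docs": {"reads_sensitive"},
    "fetch_url":   {"ingests_untrusted"},
    "send_email":  {"exfiltrates"},
    "delete_file": {"modifies"},
}

@dataclass
class AgentContext:
    touched_sensitive_data: bool = False  # agent has read restricted data
    saw_untrusted_input: bool = False     # e.g. web pages, inbound email

def guard(ctx: AgentContext, tool: str) -> bool:
    """Allow the call only if it doesn't complete the trifecta."""
    caps = TOOL_CAPS.get(tool, set())
    consequential = bool(caps & {"exfiltrates", "modifies"})
    if ctx.touched_sensitive_data and ctx.saw_untrusted_input and consequential:
        return False  # data + untrusted input + dangerous action: block
    # record taint introduced by this call, for future checks
    ctx.touched_sensitive_data |= "reads_sensitive" in caps
    ctx.saw_untrusted_input |= "ingests_untrusted" in caps
    return True

# Usage: an agent that has read sensitive docs and an untrusted web page
# may no longer send email.
ctx = AgentContext()
assert guard(ctx, "search_docs")      # taints context: sensitive data
assert guard(ctx, "fetch_url")        # taints context: untrusted input
assert not guard(ctx, "send_email")   # all three present -> blocked
```

In a real deployment you would enforce this outside the model, in the dispatch layer, and escalate blocked calls to a human, since guardrails inside the prompt are, as noted above, no guarantee.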
Follow the latest AI headlines via SkynetAndChill.com on Bluesky