AI Reading for Wednesday August 27

95% of statistics are made up

Here is my crack at explaining that viral MIT paper about the 95%/5% figure. I think this is the key chart:

I think what this is saying is: suppose you are a biglaw firm, just as an example. 60% of firms have looked at implementing a comprehensive tool like Harvey, 20% piloted one, and 5% have measurable improvement in productivity or profitability KPIs. So actually 25% of those pilots are successful.
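The arithmetic behind that "25% of pilots are successful" claim is a conditional rate, which can be sketched in a few lines (the percentages are the ones read off the chart above, not exact figures from the paper):

```python
# Illustrative arithmetic only, using the figures as read above:
# 60% of firms evaluated a tool, 20% piloted one, 5% got measurable KPI gains.
evaluated = 0.60  # looked at implementing a comprehensive tool
piloted   = 0.20  # ran a pilot
succeeded = 0.05  # measurable productivity/profitability improvement

# Conditional success rate: of the firms that piloted, how many succeeded?
pilot_success_rate = succeeded / piloted
print(f"{pilot_success_rate:.0%} of pilots succeeded")  # prints "25% of pilots succeeded"
```

The point is that the headline 5% is measured against all firms, while the more interesting number is success conditional on actually running a pilot.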

The dark blue is firms that rolled out, e.g., ChatGPT Enterprise. It turns out that when you do these big top-down projects, a lot of people say: why should I use Harvey when I have my own ChatGPT prompts that I can customize and improve, and workflows that work better?

Or in financial services, substitute BlueFlame or Auquan or Hebbia or Rogo, although about those I mostly hear good things (but occasionally the narrative above).

The key thing is you have to customize Harvey for those workflows, build in observability to see what works, and improve it for everyone over time, with agentic workflows feeding it the right context for what users actually do with it. You also need good training on the agentic multi-turn workflows, so users get maximum benefit beyond single-turn chat prompts or repeated cut-and-paste.

If people get benefit from chatbots, they should get more benefit from longer scripted versions of what they do daily. But a lot of the time there is a lack of alignment with their actual work, so they don't.

The overall paper has a lot of nuance that resonates: AI is stochastic and evolving fast, and you have to learn how to implement it in your context.

AI is hard to measure: you give coders Claude and it helps them and they use it, but exactly how much more productive are they? Chatbots help; shadow AI that people build in the trenches is aligned with their business needs; AI helps with grunt work but less with the highest-value work.

They put a clickbait headline on it and it went viral, which was the intent. But what went viral was "95% of the time AI adds no value," which is not what they said.

The eye sees what it wants to see, or brings the power to see… People don't do nuance; they want certainty. AI is either useless or magic beans; there can be no middle ground. Whereas in reality it's hard: it takes time, you have to learn how to do it right, adapt as it changes, and train people, and success is hard to measure.

Follow the latest AI headlines via SkynetAndChill.com on Bluesky