Szymon Kaliski

LLM

  • LLMs are like "lossy" compression over all the (easily(?) accessible) text on the internet
    • you can then "run" the network with some specific input and it will come up with more characters (tokens) based on "lossy un-compression"
  • the models are trained by optimizing next character (token) prediction based on what's already in the context, and that works surprisingly well for language tasks
    • but, the models are all "System 1" (Thinking, Fast and Slow) - they can "blab", but have a hard time "thinking" through things; providing "System 2" is currently being explored with various means
      • one of the ideas is providing "tools" - ChatGPT 4 can already browse and use Python during its response - the way it works is that the model emits special tokens that the code "in between" the model and the user parses; it stops feeding the model, evaluates the code instead, and then returns the result back into the context
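The tool loop above can be sketched in a few lines. This is a minimal illustration under assumed conventions - the `<tool>`/`</tool>` markers, `run_with_tools`, and `evaluate_tool` are all hypothetical stand-ins for whatever special tokens and harness a real system uses:

```python
# Hypothetical sketch of the "tools" loop: the model emits special tokens,
# the harness stops generation, evaluates the code in between, and appends
# the result back into the context before continuing.

TOOL_START = "<tool>"
TOOL_END = "</tool>"

def run_with_tools(model, prompt, evaluate_tool):
    context = prompt
    while True:
        output = model(context)  # generate until the model stops
        if TOOL_START in output and TOOL_END in output:
            # stop feeding the model; evaluate the code in between
            before, rest = output.split(TOOL_START, 1)
            call, _ = rest.split(TOOL_END, 1)
            result = evaluate_tool(call)
            # return the result into the context and keep generating
            context += before + TOOL_START + call + TOOL_END + str(result)
        else:
            return context + output
```

The important structural point is that the model never "runs" the tool itself - the code sitting between the model and the user does, which is why this reads as bolting a "System 2" onto a "System 1" generator.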

LLMs like focus

Just like humans, LLMs don't perform well when you ask them to multitask. An early version of Whole Earth AI sent the entire lesson generation prompt (with all its tasks) to a single LLM call. So in the case of our demo, all of the content for the lesson "Establish Project Parameters" - including instructions for "Fundamentals of rooftop greenhouses, hydroponics, and smart-ag basics", "Choose primary crop species to optimize system design", "Measure and record rooftop footprint & load limits", and "Determine local building code & permitting requirements" - would be handled by a single LLM prompt.

That didn't work well. What did work well was splitting all of these task instructions up into dedicated LLM calls.

The less you ask of the AI, the better; therefore, Whole Earth AI breaks up content as much as possible before passing it to an LLM.

What I Learned Building Whole Earth AI ↗ - Kasey Klimes

Backlinks

  1. 2026-01-05 Independent Consulting, and Interfacing with LLMs
  2. 2025-10-13 Open Questions Around LLM Interfaces
  3. 2025-09-29 Bi-Directional State Synchronization in React, and Graphical Notation in Figma
  4. 2025-09-26 VPLs and LLMs
  5. 2025-09-26 "Learning to Program" and LLMs
  6. 2025-09-25 VPL
  7. 2025-09-25 Future Of Coding
  8. 2025-06-30 Prototyping Component Re-Use, and the Simplest Whisper Wrapper
  9. 2025-03-31 Motorizing External Blinds, Dry Filament, and Yearning for a Software Scope
  10. 2025-01-06 Back at it, Dampening Copilot, and 3D-Printed Organization
  11. 2024-09-05 Replit Agent, IDE for Humans and LLMs
  12. 2023-04-03 Joining Replit, and musings from the Job Hunt