Rosie the Robot for Everyone

Automatons will be everywhere doing everything!

Every week I’ll share which AI tools I’m exploring, the experiments I’m running, and my observations on where the world is heading.

🔧 Three Tools I’m Testing

⛈️ Storm - Another deep-research tool, created at Stanford, that writes Wikipedia-style articles. I’ve tried it a little, but I haven’t yet figured out where it fits versus other options.

🎨 Polymet - A very cool design-oriented tool that lets you describe what you’d like to design and then adjust from there. I really like the output, but I haven’t decided whether I need it or whether I’d pay for it.

📧 Reclaim - An AI calendar management tool, acquired by Dropbox. I’m obsessed with getting all my calendars together and organized; if something isn’t on my calendar, it gets missed. I will continue working with Reclaim to refine my daily calendar management.

🧪 AI Experiment of The Week

I wrote a children’s book for my son, Wesley, for his fourth birthday—it is available very expensively on Amazon. I’m starting to work on the next book for our middle son, Evan. One area I’ve been very interested in with AI is using it to create my illustrations for the new book. This has led me down the rabbit hole of creating consistent characters across image generation.

My experiment this week involves using Dzine, a tool that has excellent features for creating consistent characters and building scenes with them.

I first trained a new character model on several pictures of Evan. Then, I used that trained model to adjust the output style and create a scene to test the output. The beauty is that you can create multiple characters and build scenes with them. I think I’ve found the tool for my book illustrations.

Evan by the ocean with an octopus

The scene above was generated in two parts: a character prompt that uses Evan’s trained model, followed by scene-specific details.

Evan, a young male toddler with light skin, blue eyes, and blonde curly hair, wearing a plaid shirt, in an illustration style like a comic book

And the scene prompt:

Evan is holding a small orange octopus while standing on the beach looking out at the ocean.

This generated the great result above. This experiment has been fantastic, and I’ll continue to refine and use Dzine for my book creation.

📰 Article of The Week

Chain of Draft paper, along with its code and examples - a research paper introducing Chain of Draft prompting.

Large Language Models (LLMs) have demonstrated remarkable performance in solving complex reasoning tasks through mechanisms like Chain-of-Thought (CoT) prompting, which emphasizes verbose, step-by-step reasoning. However, humans typically employ a more efficient strategy: drafting concise intermediate thoughts that capture only essential information. In this work, we propose Chain of Draft (CoD), a novel paradigm inspired by human cognitive processes, where LLMs generate minimalistic yet informative intermediate reasoning outputs while solving tasks. By reducing verbosity and focusing on critical insights, CoD matches or surpasses CoT in accuracy while using as little as only 7.6% of the tokens, significantly reducing cost and latency across various reasoning tasks.

Silei Xu, Wenhao Xie, Lingxiao Zhao, and Pengcheng He

The "Chain of Draft" paper from Zoom Communications researchers demonstrates a brilliantly simple yet powerful concept: LLMs can reason more efficiently with minimal prompting. 

What makes this research particularly fascinating is how it mirrors human cognitive processes. When tackling complex problems, humans rarely write extensive explanations for each step—we jot down concise notes capturing only essential insights. Chain of Draft applies this same principle to AI, showing that verbosity isn't necessary for effective reasoning. The results across arithmetic, common sense, and symbolic reasoning tasks confirm that LLMs can maintain their reasoning capabilities while drastically reducing their "thinking out loud."

The wider implications extend beyond token efficiency. This research reveals how we're still discovering fundamental ways to interact with existing models, unlocking capabilities that weren't explicitly designed but emerge naturally. Just as DeepSeek R1 changed how we think about training reasoning models, Chain of Draft is changing how we prompt them. It suggests we're only beginning to understand how to communicate effectively with these systems, with potentially many more efficiency breakthroughs waiting to be discovered through simple interaction pattern changes rather than building ever-larger models.
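To make the idea concrete, here is a minimal sketch of the prompting change the paper describes: Chain of Draft swaps the usual verbose "think step by step" instruction for one that caps each reasoning step at a few words. The instruction wording below is paraphrased from the paper’s examples, and the actual model call is omitted, so treat this as an illustration rather than the paper’s exact code.

```python
def cot_prompt(question: str) -> str:
    """Standard Chain-of-Thought style instruction: verbose, step-by-step reasoning."""
    return (
        "Think step by step to answer the following question. "
        "Return the answer at the end of the response after a separator ####.\n\n"
        + question
    )

def cod_prompt(question: str) -> str:
    """Chain-of-Draft style instruction: terse drafts, a handful of words per step."""
    return (
        "Think step by step, but only keep a minimum draft for each thinking step, "
        "with 5 words at most. "
        "Return the answer at the end of the response after a separator ####.\n\n"
        + question
    )

def extract_answer(response: str) -> str:
    """Pull the final answer out of a response that uses the #### separator."""
    return response.split("####")[-1].strip()

if __name__ == "__main__":
    q = "Jason had 20 lollipops. He gave Denny some. Now Jason has 12. How many did he give Denny?"
    # Either prompt would be sent as the message to an LLM; CoD simply
    # constrains how much "thinking out loud" the model produces.
    print(cod_prompt(q))
    print(extract_answer("20 - 12 = 8 #### 8"))
```

The token savings come entirely from the instruction: the model still reasons through intermediate steps, it just writes them as shorthand drafts instead of full sentences.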

🌎 Where the World is Going

We're witnessing robots of all kinds silently infiltrating our society. Mechanical helpers are appearing everywhere, performing increasingly sophisticated tasks on manufacturing floors, warehouses, delivery routes, hospital hallways, and elsewhere. What fascinates me most is that we're just beginning to see the tip of the iceberg.

It's been captivating to watch Boston Dynamics create usable, trainable robot dogs and even more intriguing seeing Unitree's humanoid robots performing ninja-like movements (not exactly Bruce Lee). The videos are simultaneously awe-inspiring and disquieting. While I believe we're still years away from having Rosie from The Jetsons tidying our homes, the technology is accelerating at a breathtaking pace - propelled by the powerful combination of advancements in mechatronics and AI model training.

What's different now is the convergence. Robots have existed for decades, but they've been limited by rigid programming and environmental constraints. Today's robots aren't just mechanical marvels; they're vessels for increasingly sophisticated AI systems that can learn, adapt, and make decisions in unpredictable environments. Tesla's Optimus, Figure's humanoid assistant, and even the humble Roomba are benefiting from neural networks that continuously improve their capabilities.

When I think about the future, I notice we often worry that AI will take our jobs, but perhaps robots, the physical manifestation of these AI systems, will be what ultimately transforms the workplace. White-collar workers fear large language models, but maybe they should be watching the mechanical arms and legs being developed in research labs around the world.

Are we really that far off from Short Circuit, maybe Wall-E, or (hopefully not) Terminator? The gap is narrowing faster than most people realize. The question isn't whether robots will become ubiquitous in our society, but how we'll reshape our economic and social structures to accommodate our new mechanical colleagues. As I watch these developments unfold, I'm both excited and cautious about a future where the line between science fiction and reality continues to blur with each passing day.

If we get to robots that are sentient, I want Number 5!

👨‍💻 About Me

Just a Guy with An Ostrich

My name is Charlie Key. I love technology, building awesome stuff, and learning. I’ve built several software companies over the last twenty-plus years.

I’ve written this newsletter to help inspire and teach folks about AI. I hope you enjoy it.

➡️ Learn More About The Guy ⬅️