MCP for AI and more

Integration of data and services will make AI even more powerful

Every week, I’ll share the AI tools I’m exploring, the experiments I’m running, and observations about where the world is heading.

🔧 Three Tools I’m Testing

📹 Argil - A video creation platform you can train on yourself (or others) to generate short- and medium-length videos from text. Results have been okay, but not yet something I’d put out into the world.

💁‍♂️ ChatGPT Assistants (via the OpenAI Assistants API) - These “agents” carry a preset of instructions; you pass them the relevant content and they return the output. I’m using them to predefine instructions for social post creation (a rough sketch of the pattern follows this list). I’ll definitely continue to use these.

📸 Gemini 2.0 Flash Experimental - Playing around with its image generation, which is solid at keeping subjects and styles consistent across images. Overall, I like Google’s Gemini (and open-weight Gemma) models in general.
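Here’s a minimal sketch of that Assistants pattern, using the OpenAI Python SDK’s beta Assistants API. The assistant name, instructions, and model are placeholders for illustration, not my actual setup.

```python
# Minimal sketch: a reusable assistant with preset instructions,
# then a per-article thread that passes in content and reads the output.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# One-time setup: the instructions define the posting style and format.
assistant = client.beta.assistants.create(
    name="Social Post Writer",  # placeholder name
    instructions="Summarize the article and draft a LinkedIn post and a tweet.",
    model="gpt-4o",
)

# Per article: create a thread, pass the content, and run the assistant.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="<article text goes here>",
)
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
if run.status == "completed":
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)  # the drafted posts
```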

🧪 AI Experiment of The Week

This week, I’ve been automating some of Losant’s social media postings. This experiment is based on a video tutorial from Futurepedia.

The first significant step was getting up and running on Make, the tool I’m using to build automation workflows. Make has a metric ton of integrations with other services, and you assemble automated “scenarios” from a collection of nodes (tools, apps, and integrations) wired together into logic-based flows.

I built two scenarios. The first watches a Slack channel for messages, then builds a summary of the linked article and its corresponding social posts and drops them into a Google Sheet for review. Once a user reviews and approves the posts, a second scenario watching the Google Sheet publishes the approved content online.

Here are the scenarios in the Make tool.

Scenario for Creating Posts

Scenario for Posting Content
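For anyone who thinks in code, here’s a rough Python sketch of the same two-scenario flow. Make handles all of this with no-code nodes, so every function below is a hypothetical stand-in for a Make integration (Slack trigger, LLM step, Google Sheets, social posting) rather than anything I actually wrote.

```python
# Rough sketch of the two Make scenarios; all functions are hypothetical
# stand-ins for Make's built-in integrations.

def summarize_and_draft_posts(article_url: str) -> dict:
    # Stand-in for the LLM node that writes a summary plus social drafts.
    return {"url": article_url, "summary": "...", "posts": ["draft 1", "draft 2"]}

def scenario_create_posts(slack_messages: list[str], sheet: list[dict]) -> None:
    # Scenario 1: watch a Slack channel, draft content, drop it in the sheet.
    for url in slack_messages:
        row = summarize_and_draft_posts(url)
        row["status"] = "pending review"
        sheet.append(row)

def scenario_post_content(sheet: list[dict]) -> None:
    # Scenario 2: watch the sheet and publish anything a human approved.
    for row in sheet:
        if row["status"] == "approved":
            for post in row["posts"]:
                print(f"posting: {post}")  # stand-in for the social network node
            row["status"] = "published"

# Example run-through of the review loop.
sheet: list[dict] = []
scenario_create_posts(["https://example.com/article"], sheet)
sheet[0]["status"] = "approved"  # the human review step in the Google Sheet
scenario_post_content(sheet)
```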

Make will be a significant automation tool for me going forward.

📰 Article of The Week

Replit's MCP: Everything You Need to Know - Replit's comprehensive guide to the Model Context Protocol

MCP, the Model Context Protocol (an open standard introduced by Anthropic and covered in depth in Replit's guide), represents a real shift in AI integration. Like REST, which standardized web service communication, MCP is emerging as the framework that defines how AI models connect to tools, data sources, and runtime environments such as coding platforms.
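To make that concrete, here's a minimal sketch of an MCP server using the official Python SDK (the mcp package). The server name, tool, and log-reading logic are illustrative assumptions; the point is that any client that speaks MCP (Cursor, Claude Desktop, and so on) can discover and call the tool.

```python
# Minimal MCP server sketch: exposes one tool an AI client can call
# to pull runtime context (here, the tail of a log file) on demand.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("runtime-context")  # illustrative server name

@mcp.tool()
def read_log(lines: int = 20) -> str:
    """Return the last N lines of the app log so the model can see runtime behavior."""
    with open("app.log", encoding="utf-8") as f:  # assumed log location
        return "".join(f.readlines()[-lines:])

if __name__ == "__main__":
    mcp.run()  # serves over stdio so an MCP client can connect
```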

The broader implications of MCP extend far beyond software development. The protocol provides a blueprint for how AI systems can integrate with any complex environment that requires contextual awareness. In education, healthcare, finance, and beyond, MCP's pattern of giving models standardized access to live data and tools could transform how AI understands and responds to real-world constraints. Being able to feed additional data and context to AI solutions addresses the "hallucination" problem that has limited AI adoption in many industries.

What's particularly exciting is seeing developers wire MCP servers into Cursor, creating that coveted "vibe coding" experience where AI and human developers flow together seamlessly. Cursor's ability to understand project context, combined with the tools and data MCP servers expose, is proving to be a powerful combination. Developers report that when Cursor is connected to project-specific MCP servers, the tool shifts from merely suggesting code to genuinely reasoning about the project's architecture and runtime behavior. This integration shortens the gap between ideation and implementation. As more coding environments adopt MCP, AI tools evolve from simple assistants into true collaborative partners that can reason about code within its full execution context.

🌎 Where the World is Going

I had a fascinating conversation this week with a senior engineer, and it's been living rent-free in my mind ever since. We discussed how AI is reshaping skill development, and he pointed out something profound: AI might create a chasm between subject matter experts and novices by eliminating the middle ground where expertise traditionally develops.

Think about a junior software developer today. In the past, they'd cut their teeth on system design challenges, debugging sessions, and architecture decisions. They'd make mistakes, learn from mentors, and gradually build intuition. But now? AI can generate reasonably good system architecture or boilerplate code in seconds. The novice gets a working solution without truly understanding why it works. Meanwhile, the experts who trained for years recognize both the brilliance and limitations in the AI's output. The middle ground—where deep learning happens through struggle—is vanishing.

This pattern extends far beyond software. Consider writing, research, design, or even music composition. The traditional apprenticeship model involves years of deliberate practice, but AI offers shortcuts that deliver acceptable results without the corresponding skill development. We're potentially creating two classes of professionals: true experts who deeply understand the fundamentals and can direct AI tools with sophistication, and AI-dependent practitioners who can produce output but lack the foundational knowledge to truly innovate or understand edge cases.

We might need to deliberately create "AI-free zones" in education and early career development—spaces where people can develop fundamental skills before introducing AI assistance. Otherwise, we risk a world where knowledge becomes increasingly concentrated among those who developed expertise before AI arrived, creating a widening gap between those who genuinely understand and those who merely operate the tools that understand for them.

👨‍💻 About Me

Just a Guy with An Ostrich

My name is Charlie Key. I love technology, building awesome stuff, and learning. I’ve built several software companies over the last twenty-plus years.

I’ve written this newsletter to help inspire and teach folks about AI. I hope you enjoy it.

➡️ Learn More About The Guy ⬅️