Why We’re Building AI Systems, Not Just Models

The Flaws of Large Language Models Are Undeniable: Here's Why I Think AI Engines Will Be the New Norm

Artificial intelligence (AI) has rapidly evolved, showcasing impressive capabilities, particularly through large language models (LLMs) like GPT. However, the future of AI lies not in isolated models but in creating compounding AI systems—integrated frameworks that combine the strengths of various AI technologies to address complex real-world problems. This philosophy is rooted in the insights of Professor Christopher Potts of Stanford University, who emphasizes the importance of building an AI "engine"—a system that evolves and adapts over time.

Understanding the Shift: Systems vs. Models

Large Language Models: The Starting Point

LLMs are powerful tools for generating human-like text, answering questions, and performing various tasks. They’ve transformed industries such as customer support, education, and creative writing. However, they come with significant limitations:

  • Accuracy Issues: LLMs can generate plausible but incorrect or biased information.

  • Context Limitations: LLMs work within a finite context window, so long or complex instructions can exceed what they can reliably track.

  • Ethical Concerns: Their outputs can unintentionally perpetuate societal biases or ethical oversights.

For example, in healthcare, relying solely on an LLM for medical advice could lead to harmful outcomes due to its inability to verify medical accuracy.

Compounding AI Systems: A Holistic Approach

In contrast, compounding AI systems integrate multiple AI components, each specializing in a specific task, to create a cohesive, adaptable, and efficient framework. These systems resemble engines, where every part contributes to the overall functionality.

For instance, in a healthcare application:

  1. Retrieval Models pull precise, verified medical information from a database.

  2. LLMs interpret patient queries and formulate responses.

  3. Decision Support Tools ensure the recommendations align with clinical guidelines.

This integrated approach helps ensure that the system is not only more accurate but also contextually relevant and compliant with clinical and ethical standards.
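
To make the idea concrete, here is a minimal Python sketch of how such a pipeline might be wired together. The function names (`retrieve_guidelines`, `draft_response`, `passes_clinical_checks`) are hypothetical stand-ins for a real vector store, model API, and rules engine; the point is the composition, not any particular implementation.

```python
from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    sources: list[str]
    approved: bool


def retrieve_guidelines(query: str) -> list[str]:
    """Placeholder retrieval step: in practice this would query a vetted
    medical knowledge base (e.g. a vector store over clinical guidelines)."""
    return ["Guideline excerpt relevant to: " + query]


def draft_response(query: str, evidence: list[str]) -> str:
    """Placeholder LLM step: in practice this would call a language model,
    grounding the prompt in the retrieved evidence."""
    return f"Based on {len(evidence)} guideline excerpt(s), here is a draft answer to: {query}"


def passes_clinical_checks(draft: str, evidence: list[str]) -> bool:
    """Placeholder decision-support step: in practice this would run the draft
    through rule-based or model-based checks against clinical guidelines."""
    return bool(draft) and bool(evidence)


def answer_patient_query(query: str) -> Answer:
    """Compose the three components: retrieve, draft, then verify."""
    evidence = retrieve_guidelines(query)
    draft = draft_response(query, evidence)
    approved = passes_clinical_checks(draft, evidence)
    return Answer(text=draft, sources=evidence, approved=approved)


if __name__ == "__main__":
    result = answer_patient_query("What should I do about mild seasonal allergies?")
    print(result.approved, result.text)
```

Notice that each stage can be swapped out or upgraded independently, which is exactly the "engine" property the rest of this post argues for.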

The Key Advantages of Compounding AI Systems

  1. Enhanced Performance Through Collaboration: Compounding systems capitalize on the strengths of individual AI components. For example:

    • AlphaCode, a compounding system, outperformed standalone LLMs in coding competitions by combining large-scale sampling with filtering and ranking of candidate programs (a pattern sketched after this list).

  2. Flexibility and Scalability: Unlike standalone models, systems can adapt to diverse tasks by incorporating domain-specific tools. For instance, in legal contexts, retrieval-augmented systems can efficiently analyze thousands of legal precedents while summarizing them in layman’s terms.

  3. Resource Efficiency: By distributing tasks among components, compounding systems optimize computational cost, latency, and spend. Orchestration frameworks such as LangChain help wire components together, and cascading approaches like FrugalGPT route queries to cheaper models first, escalating only when needed (see the cascade sketch after this list).

  4. Synergy Across Technologies: Combining different AI technologies produces results that no single component delivers on its own. For example:

    • In marketing, retrieval-based systems identify consumer trends.

    • LLMs craft personalized content.

    • Analytics tools measure engagement to refine strategies.
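
The sample-filter-rank pattern behind results like AlphaCode’s is easy to see in miniature. The sketch below is not AlphaCode’s implementation, just a toy illustration: `generate_candidate` stands in for sampling from a code model, the example tests do the filtering, and generation frequency stands in for the clustering AlphaCode used to rank.

```python
import random
from collections import Counter


def generate_candidate(problem: str, seed: int) -> str:
    """Stand-in for sampling one candidate program from a code model."""
    rng = random.Random(seed)
    return rng.choice(["return a + b", "return a - b", "return a * b"])


def passes_tests(candidate: str, tests: list[tuple[int, int, int]]) -> bool:
    """Filter step: keep only candidates that pass the example tests."""
    func = eval("lambda a, b: " + candidate.removeprefix("return "))
    return all(func(a, b) == expected for a, b, expected in tests)


def solve(problem: str, tests: list[tuple[int, int, int]], n_samples: int = 50) -> str | None:
    # 1. Sample many candidate solutions.
    candidates = [generate_candidate(problem, seed) for seed in range(n_samples)]
    # 2. Filter: discard candidates that fail the example tests.
    survivors = [c for c in candidates if passes_tests(c, tests)]
    if not survivors:
        return None
    # 3. Rank: prefer the most frequently generated survivor
    #    (a rough stand-in for AlphaCode's clustering-based ranking).
    return Counter(survivors).most_common(1)[0][0]


print(solve("add two numbers", tests=[(1, 2, 3), (4, 5, 9)]))
```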
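
On the resource-efficiency point, a FrugalGPT-style cascade can be approximated in a few lines: try a cheap model first and escalate only when a confidence check fails. The model names and confidence values below are invented placeholders; a real system would plug in actual model calls and a learned or probability-based scorer.

```python
def call_model(model: str, query: str) -> tuple[str, float]:
    """Stand-in for a model API call returning (answer, confidence).
    In practice, confidence might come from log-probabilities or a
    separately trained scorer, as proposed in FrugalGPT."""
    cheap = model == "small-model"
    return (f"[{model}] answer to: {query}", 0.55 if cheap else 0.95)


def cascaded_answer(query: str, threshold: float = 0.8) -> str:
    # Try models from cheapest to most expensive; stop at the first
    # answer whose confidence clears the threshold.
    for model in ["small-model", "medium-model", "large-model"]:
        answer, confidence = call_model(model, query)
        if confidence >= threshold:
            return answer
    return answer  # fall back to the most capable model's answer


print(cascaded_answer("Summarize this contract clause."))
```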

Why Systems Matter: A Relatable Analogy

Think of building a compounding AI system like assembling a team of specialists for a complex project:

  • A single model (like an LLM) is akin to hiring one expert who tries to do everything. While capable, they may lack the depth required for certain tasks.

  • A compounding system is like a team where each member has a unique role—data analysts, project managers, and subject-matter experts. Together, they deliver better results by working in tandem.

Real-World Applications of Compounding AI Systems

Healthcare

A compounding system in healthcare might include:

  • Symptom Checkers powered by LLMs.

  • Diagnostic Tools leveraging machine learning algorithms.

  • Scheduling Assistants that use natural language processing to streamline appointments.

This combination not only enhances patient care but also reduces administrative burdens.

Education

In virtual learning platforms, a compounding system could:

  1. Use LLMs to generate personalized lesson plans.

  2. Leverage retrieval systems to provide supplemental resources.

  3. Employ adaptive learning algorithms to monitor student progress and adjust teaching strategies.

Marketing

By combining:

  • Predictive analytics,

  • Content generation tools, and

  • Customer behavior models,

marketers can create campaigns that resonate deeply with their audiences, driving engagement and sales.

Challenges and Future Directions

While the benefits of compounding systems are clear, their implementation comes with challenges:

  1. Complex Design: Integrating multiple AI components requires careful orchestration.

  2. Monitoring and Governance: Ensuring reliability, fairness, and ethical compliance in these systems is essential.

  3. Cost and Scalability: Building and maintaining such systems demands significant resources.

Despite these challenges, the future of AI lies in systems that are adaptable, robust, and aligned with human values. By focusing on the interplay between components, developers can create engines that evolve to meet society’s needs.

The Wisdom of Building Engines

As Professor Christopher Potts highlights, the key to sustainable innovation lies in building systems, not standalone models. Just as a car engine depends on all of its parts working in harmony, a compounding AI system is an adaptable, dynamic engine capable of handling the complexities of real-world applications.

By shifting our focus from isolated models to integrated systems, we’re creating the foundation for AI that is not only smarter but also safer, more reliable, and ultimately more human-centric.