Chapter 3

Issues to consider with foundation models

Responsible AI
Foundation models introduce new challenges in ensuring responsible AI throughout the development cycle, including issues of accuracy, fairness, intellectual property, toxicity, and privacy. These challenges arise from the vast size and open-ended nature of foundation models compared to traditional machine learning approaches.

Another concern is hallucination, where large language models (LLMs) generate plausible-sounding but inaccurate responses that are inconsistent with their training data, a consequence of how they represent and generate text.

High Costs
Another issue is the high cost of real-time inference, which motivates the use of smaller models fine-tuned for specific use cases.
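To make the cost pressure concrete, here is a back-of-envelope sketch comparing per-request inference cost for a large hosted model versus a smaller fine-tuned one. All prices and token counts are hypothetical placeholders, not real vendor rates.

```python
# Back-of-envelope inference cost comparison.
# Prices are illustrative assumptions, not actual vendor pricing.

def request_cost(prompt_tokens: int, output_tokens: int,
                 price_per_1k_tokens: float) -> float:
    """Cost of one request when billed per 1,000 tokens processed."""
    total_tokens = prompt_tokens + output_tokens
    return total_tokens / 1000 * price_per_1k_tokens

# Hypothetical rates: a large general-purpose model vs. a small fine-tuned one.
large_model = request_cost(500, 300, price_per_1k_tokens=0.06)
small_model = request_cost(500, 300, price_per_1k_tokens=0.002)

print(f"large model: ${large_model:.4f} per request")   # → $0.0480
print(f"small model: ${small_model:.4f} per request")   # → $0.0016
print(f"at 1M requests/month: ${large_model * 1_000_000:,.0f} "
      f"vs ${small_model * 1_000_000:,.0f}")
```

Even with made-up numbers, the point holds: at production traffic volumes, a per-token price difference compounds into a large operating-cost gap, which is why teams distill or fine-tune smaller models for narrow tasks.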

Emerging concerns with foundation models include toxicity, where generated content may be offensive or inappropriate, and intellectual property, since LLMs occasionally reproduce verbatim passages from their training data. Despite these challenges, mitigations such as user education, content filtering, and technical approaches like watermarking and differential privacy are being developed to address them.
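Of the mitigations above, content filtering is the simplest to illustrate. The sketch below applies a keyword blocklist to model output before it reaches the user; production systems typically use trained toxicity classifiers rather than word lists, and the blocklist terms here are hypothetical stand-ins.

```python
# Minimal content-filtering sketch: screen model output against a blocklist
# before returning it. BLOCKLIST terms are hypothetical placeholders;
# real systems use trained classifiers, not keyword matching.

BLOCKLIST = {"offensive_term_1", "offensive_term_2"}

REFUSAL = "[response withheld: content policy]"

def filter_output(text: str) -> str:
    """Return the text unchanged, or a refusal if it contains blocked terms."""
    tokens = (tok.strip(".,!?").lower() for tok in text.split())
    if any(tok in BLOCKLIST for tok in tokens):
        return REFUSAL
    return text

print(filter_output("A perfectly safe answer."))
print(filter_output("This contains offensive_term_1 somewhere."))
```

A filter like this sits at the output boundary of the application, which is why it can be layered onto any model without retraining; its weakness is that keyword matching misses paraphrased or obfuscated toxic content, motivating the classifier-based approaches used in practice.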