The evolution of computing and AI can be described in four stages. Early computing was fully deterministic: given fixed inputs, systems like spreadsheets and databases always produced the same guaranteed outputs. As big data and machine learning advanced, inputs remained deterministic but were processed by probabilistic models (like those behind targeted ads), though outputs still appeared deterministic to end users. With today’s generative AI (e.g., models from OpenAI and Anthropic), inputs, internal processes, and outputs are all probabilistic, yielding creative yet sometimes incorrect or inconsistent responses.
The fourth stage, and the next goal, is probabilistic understanding with deterministic outcomes. This means combining flexible, learned models with methods that ensure reliable, verifiable outputs, such as symbolic reasoning, knowledge grounding, or calls to deterministic external APIs and tools. The objective is to retain the adaptability and intelligence of modern AI while guaranteeing correctness, a critical need in high-stakes fields like finance, healthcare, and law. Companies like OpenAI, Microsoft, IBM, and Anthropic are already moving in this direction, developing “hybrid” AI approaches that deliver trustworthy, consistent results without sacrificing the benefits of probabilistic reasoning.
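One common shape this hybrid pattern takes is a deterministic verification gate around a probabilistic model. The sketch below is illustrative only: `call_model` is a hypothetical stub standing in for a real LLM API call, and the payment schema and validation rules are invented for the example. The point is the architecture, not the specifics: the model may interpret free-form input however it likes, but only output that survives strict, rule-based checks is ever returned.

```python
import json

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a probabilistic LLM call
    (e.g., a request to an OpenAI or Anthropic API).
    Stubbed here so the example is self-contained."""
    return '{"amount": 125.50, "currency": "USD"}'

def validate_payment(record: dict) -> bool:
    """Deterministic verification layer: fixed rules that pass or
    fail regardless of how the model phrased its answer."""
    return (
        isinstance(record.get("amount"), (int, float))
        and record["amount"] > 0
        and record.get("currency") in {"USD", "EUR", "GBP"}
    )

def extract_payment(prompt: str, max_retries: int = 3) -> dict:
    """Probabilistic understanding, deterministic outcome: the model
    interprets the input, but only validated output is returned."""
    for _ in range(max_retries):
        raw = call_model(prompt)
        try:
            record = json.loads(raw)  # deterministic parse
        except json.JSONDecodeError:
            continue  # malformed generation: retry rather than guess
        if validate_payment(record):
            return record
    raise ValueError("Model never produced a verifiable answer")

print(extract_payment("Invoice: one hundred twenty-five dollars fifty"))
```

The design point is that the probabilistic layer is never trusted directly. Correctness comes from the deterministic gate, so a wrong or malformed generation degrades into a retry or an explicit failure rather than a silently incorrect result.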