The Legacy of ELIZA
Recent dramatic advancements in Machine Learning (ML) and Natural Language Processing (NLP) have led to a flood of news articles and opinion pieces about the state of Artificial Intelligence (AI) technology. Generally, the breathless assessments range from ‘the best thing ever’ to ‘complete doomsday scenario’.
The negative hype has surpassed even that of Y2K, when date-handling code embedded in every device was supposedly going to be so confused by the rollover from 1999 to 2000 that the world would come crashing to a halt, figuratively if not literally. Elevators would drop from the top floors of every building, trains would derail, planes would fall from the sky, and microwave ovens would do who-knows-what to us all. That, or nothing at all, would happen, depending on who was hypothesizing. In hindsight, while the risk of Y2K appears to have been mostly overblown, it is helpful to remember that the mild outcome was at least partly due to the amount of preparation and investment that went into avoiding the worst potential scenarios.
We are currently witnessing a significant and rapid growth in the capabilities of AI, in some cases on a near-daily basis. It is driven by huge investments from many top U.S. tech companies and fueled by unprecedented public interest in headline-grabbing examples like OpenAI’s ChatGPT and DALL-E, Google’s Bard, Microsoft’s Bing, and many others.
These solutions are based on large language models (LLMs) or large image models (LIMs), which ingest vast amounts of text- or image-based data to train a machine-learning model. The resulting LLM or LIM can then generate output in the form of writing or pictures that is often both useful and convincingly human (hence the label ‘Generative AI’).
There has long been a keen interest in creating machines that can ‘exhibit cognitive capabilities and conversational skills’ to rival human beings. The first functional chatbot was ELIZA, a natural language processing program designed at MIT by Joseph Weizenbaum in the mid-1960s.
The program prompts the user to enter a comment or question, and ELIZA’s response algorithm looks for specific keywords in the input and pseudo-randomly picks a response from a pre-defined list of phrases. It repeats the prompt-and-respond loop until the program is terminated. The result resembles a back-and-forth ‘conversation’ between the user and the computer. That made ELIZA one of the first programs that could even attempt to pass the famous Turing Test, in which ‘artificial intelligence’ is measured by the ability of a machine to convince humans that it is ‘human’ as well.
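The prompt-and-respond loop described above can be sketched in a few lines of Python. This is a minimal, illustrative reconstruction of the general technique (keyword matching plus pseudo-random selection from canned phrases); the keywords and response templates below are invented for this example and are not Weizenbaum’s original script.

```python
import random

# Illustrative keyword -> response templates (not ELIZA's original script).
# "{rest}" reflects the user's own words back, Rogerian-style.
RULES = {
    "mother": ["Tell me more about your family.",
               "How do you feel about your mother?"],
    "i am": ["Why do you say you are {rest}?",
             "How long have you been {rest}?"],
    "because": ["Is that the real reason?",
                "What other reasons come to mind?"],
}
DEFAULTS = ["Please go on.", "I see.", "What does that suggest to you?"]

def respond(user_input: str) -> str:
    """Scan the input for a known keyword and pick a canned reply."""
    text = user_input.lower().strip(".!?")
    for keyword, templates in RULES.items():
        if keyword in text:
            template = random.choice(templates)
            # Reflect the words that follow the keyword back at the user.
            rest = text.split(keyword, 1)[1].strip()
            return template.format(rest=rest) if "{rest}" in template else template
    # No keyword matched: fall back to a generic continuation prompt.
    return random.choice(DEFAULTS)
```

Wrapping `respond()` in a loop that reads user input until the program is terminated reproduces the back-and-forth ‘conversation’ the article describes, e.g. `respond("I am sad")` might return `"How long have you been sad?"`.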
ELIZA’s method of processing input keywords and generating output phrases is designed to mimic a person-centred psychotherapy approach in the style of Carl Rogers, in which the therapist often reflects the patient’s words back to the patient. That’s pretty much the same thing that all modern LLMs do, except that they have a keyword list that’s a billion times bigger than ELIZA’s and a much better method for calculating responses. That means they can be much more human-like in their responses and much broader regarding the types of input they can successfully process. They can be so convincing that purveyors of public AI solutions have had to continually inject more caveats, disclaimers, and external references to limit potential harm and liability.
While today’s publicly available generative AI solutions are far superior to ELIZA in nearly every conceivable way, there is one significant advantage that ELIZA still has – PRIVACY. When you share your most important secrets with your computer-based therapist, and it’s ELIZA, you can be sure that it will keep your input safe (assuming a fully secured or entirely disconnected computing environment).
ELIZA’s entire codebase is publicly available, making the input-to-output process transparent and understandable. Unfortunately, the same can’t be said if your computer-based therapist is ChatGPT, Bard, Bing, or any other Generative AI offering. With them, you can be sure that the therapist-patient confidentiality agreement WILL be broken when the AI engine adds whatever comments, questions, and secrets you share into its gigantic database of material used to fuel its ongoing learning processes.
If you want to avoid, or at least minimize, the risks and pitfalls that come along with the potential productivity benefits of public generative AI, remembering past lessons is helpful. ELIZA provided a preview of modern chatbots and other generative AI solutions and serves as an excellent example of the main differences between public (external) and private (internal only) artificial intelligence implementations. Y2K showed that it is better to prepare for the worst than to be unpleasantly surprised by it. Especially when ‘worst’ might someday actually include killer robots from the future.
DISCLAIMER
Copyright ©2023 by DivIHN Integration Inc. | [email protected].