AI is everywhere – but where did it come from? (Part 2)
Welcome back! In part 1, we delved into the early milestones of the AI journey, including the groundbreaking work of Alan Turing, the pioneering chatbot Eliza, and the origins of the term “Artificial Intelligence.” Now, let’s dive deeper into the history and explore the developments that have led us to our current standing in the world of AI:
Despite all these advancements, the field of AI soon faced its first obstacles. Progress plateaued, and much of the early work, though still impressive, proved impractical for real-world applications. The imagination and the possibilities of what could be done with AI were endless, but the technology was just not there yet. Funding for AI research was drastically reduced, and research in neural networks in particular (yes, the networks from the beginning of this article) was stopped. Scientists had still not figured out how to train them, which made them quite useless. We entered the first AI Winter.
In the 80s, a technology called “Expert Systems” briefly ended this winter period. Expert Systems were essentially reasoning machines that would, for example, support doctors in diagnosing patients by analyzing their symptoms and providing recommendations. This time, not only the scientific community got excited about AI, but also industry. Funding increased again and an AI industry formed, at that time mainly building Expert Systems.
However, only a decade later, these Expert Systems proved too expensive. They were difficult to update, failed catastrophically when presented with unusual input, and were unable to learn. Other AI research also failed to deliver the promised results. And with this, the next hibernation period began: the second AI winter.
But this was not just any low-funding period. AI actually became a problematic term. Researchers avoided labeling their work as “AI” and instead used more specific terms like machine learning or informatics. Pursuing AI research at that time was seen as chasing a failed dream. But this does not mean that no advancements were made. During this second winter period, AI technologies were added to a lot of software, just not labeled as such. So while you might feel that AI entered our lives overnight and is suddenly everywhere, it has been around all along, just under different names.
It won’t spoil anything to say that the tables turned once more. One of the big turning points came in 2012 with AlexNet, a neural network that set an impressive new benchmark for recognizing what an image shows. The model combined several recent advances in machine learning with improvements in computer hardware and the availability of ever more data. The AI community finally had the means to successfully train neural networks in practice!
From that moment on, it was mostly onwards and upwards. Models became larger and larger, and the field continued to benefit from hardware improvements. Terms like Big Data and Deep Learning quickly dominated the discussion. We finally made it out of the AI winter, and funding for AI products and hardware skyrocketed to 8 billion dollars. Models started to rival humans at various tasks, especially in computer vision. In language processing, the main advancement was answering trivia questions, and after having beaten Kasparov at chess in 1997, AI also defeated the best human players at Go, a game far more complex and intricate than chess.
Things were looking good for AI, and this was only the beginning. In 2017, a neural network architecture called the “Transformer” (the T in GPT) was proposed by researchers at Google. This architecture was quickly adopted in many language models, leading to massive leaps in the language-processing sub-discipline of AI.
In 2020, OpenAI released the GPT-3 model, one of the most advanced AI models at that time and significantly larger than its predecessor. In contrast to the previous GPT models, OpenAI chose not to release it as open source, citing ethical and safety concerns. Examples of text generated by GPT-3 were absolutely astonishing at the time, even if the model still struggled with repetition in longer texts.
Of course, GPT-3 was followed by GPT-3.5, which was followed by GPT-4, and the rest of industry and the research community caught up as well. Image generation, too, has reached new heights. Everything seems possible again.
Don’t get me wrong: I believe we live in a fascinating time, with AI pushing the boundaries of what we thought possible. AI offers us a lot of possibilities and opportunities. However, there are also a lot of legal, social, and environmental challenges we are facing and should not forget amid the big hype.
If the history of AI has shown us one thing, it is that human imagination and belief in the possibilities of AI are often years ahead of what we can actually realize. After the first machine translations were released to the public back in the late 50s, the news was already praising a talking machine, and many translators probably feared losing their jobs. However, it soon became clear that the technology was just not usable for everyday tasks.
I think this highlights pretty well where we are now. AI has come a long way and so much seems possible. Why write a blog post if you can just ask a language model to do it for you? Why even bother learning to code if models are already doing it for us?
Because we are just not there yet. The more I use AI tools, the more I realize that the last step is still missing. AI is extremely advanced and can do a lot of things, but we still cannot, and should not, blindly trust its output. There is still a long way to go for AI, and we humans still have our place in the world. I, for one, am happy to live in a time where everything seems possible once again, but I am also aware of the responsibility that comes with this technological advancement. Now is the time to reshape how we work and live, and to see how we can take only the good parts from it.